id (int64, 12–1.07M) | title (stringlengths 1–124) | text (stringlengths 0–228k) | paragraphs (list) | abstract (stringlengths 0–123k) | date_created (stringlengths 0–20) | date_modified (stringlengths 20–20) | templates (sequence) | url (stringlengths 31–154) |
---|---|---|---|---|---|---|---|---|
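The row above is the dataset's column schema. As a minimal sketch of how records with these columns might be read — assuming each record is serialized as one JSON object per line (the JSON Lines layout and the file name wikipedia_dump.jsonl are illustrative assumptions, not something this dump specifies):

```python
import json

def iter_records(path):
    """Yield (id, title, paragraphs) from an assumed JSON Lines dump."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)  # one JSON object per line (assumption)
            yield record["id"], record["title"], record["paragraphs"]

# Each entry in "paragraphs" carries its own "paragraph_id", "text", and
# section "title" (e.g. "Taxonomy"), as the records below illustrate.
for article_id, title, paragraphs in iter_records("wikipedia_dump.jsonl"):
    sections = sorted({p["title"] for p in paragraphs if p["title"]})
    print(article_id, title, len(paragraphs), sections)
```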
6,639 | Cantor Fitzgerald | Cantor Fitzgerald, L.P. is an American financial services firm that was founded in 1945. It specializes in institutional equity, fixed-income sales and trading, and serving the middle market with investment banking services, prime brokerage, and commercial real estate financing. It is also active in new businesses, including advisory and asset management services, gaming technology, and e-commerce. It has more than 5,000 institutional clients.
Cantor Fitzgerald is one of 24 primary dealers that are authorized to trade US government securities with the Federal Reserve Bank of New York.
Cantor Fitzgerald's 1,600 employees work in more than 30 locations, including financial centers in the Americas, Europe, Asia-Pacific, and the Middle East. Together with its affiliates, Cantor Fitzgerald operates in more than 60 offices in 20 countries and has more than 12,500 employees.
Before 2001, the company's headquarters were located between the 101st and 105th floors of the North Tower of the World Trade Center in New York City, just above the impact site of American Airlines Flight 11 during the September 11 attacks. All 658 Cantor Fitzgerald employees who were present that day were killed, the largest loss of life of any single organization in the attacks.
Cantor Fitzgerald was formed in 1945 by Bernard Gerald Cantor and John Fitzgerald as an investment bank and brokerage business. It later became known for its computer-based bond brokerage, the quality of its institutional distribution business model, and its standing as the market's premier government securities dealer.
In 1965, Cantor Fitzgerald began "large block" sales/trading of equities for institutional customers. It became the world's first electronic marketplace for US government securities in 1972, and in 1983, it was the first to offer worldwide screen brokerage services in US government securities.
In 1991, Howard Lutnick was named president and CEO of Cantor Fitzgerald; he became chairman of Cantor Fitzgerald, L.P., in 1996.
Cantor Fitzgerald's corporate headquarters and New York City office, on the 101st to the 105th floors of 1 World Trade Center in Lower Manhattan (2 to 6 floors above the impact zone of American Airlines Flight 11), were destroyed during the September 11, 2001 attacks. At 8:46:46 a.m., six seconds after the plane struck the tower, a Goldman Sachs server issued an alert saying that its trading system had gone offline because it could not connect with Cantor Fitzgerald's server. Since all stairwells leading past the impact zone were destroyed by the initial crash or blocked with smoke, fire, or debris, every employee who reported for work that morning was killed in the attacks; 658 of its 960 New York employees, or 68.5% of its total workforce, were killed or missing, a loss considerably greater than that of any other World Trade Center tenant, the New York City Police Department, the Port Authority Police Department, the New York City Fire Department, or the Department of Defense. Forty-six contractors, food service workers, and visitors in the Cantor Fitzgerald offices at the time were also killed. CEO Howard Lutnick was not present that day, but his younger brother, Gary, was among those killed. Lutnick vowed to keep the company alive, and the company was able to bring its trading markets back online within a week.
On September 19, 2001, Cantor Fitzgerald pledged to distribute 25% of the firm's profits for the next five years, and it committed to paying for ten years of health care for the benefit of the families of its 658 former Cantor Fitzgerald, eSpeed, and TradeSpark employees (profits that would otherwise have been distributed to the Cantor Fitzgerald partners). By 2006, the company had fulfilled its promise, having paid a total of $180 million (plus an additional $17 million from a relief fund run by Lutnick's sister, Edie).
Until the attacks, Cantor had handled about a quarter of the daily transactions in the multi-trillion dollar treasury security market. Cantor Fitzgerald has since rebuilt its infrastructure, partly through the efforts of its London office, and now has its headquarters in Midtown Manhattan. The company's effort to regain its footing was the subject of Tom Barbash's 2003 book On Top of the World: Cantor Fitzgerald, Howard Lutnick, and 9/11: A Story of Loss and Renewal as well as a 2012 documentary, Out of the Clear Blue Sky.
On September 2, 2004, Cantor and other organizations filed a civil lawsuit against Saudi Arabia for allegedly providing money to the hijackers and Al Qaeda. It was later joined in the suit by the Port Authority of New York. Most of the claims against Saudi Arabia were dismissed on January 18, 2005.
In December 2013, Cantor Fitzgerald settled its lawsuit against American Airlines for $135 million. Cantor Fitzgerald had sued for loss of property and interruption of business, alleging that the airline had been negligent in allowing the hijackers to board Flight 11.
In 2003, the firm launched its fixed-income sales and trading group. Three years later, the Federal Reserve added Cantor Fitzgerald & Co. to its list of primary dealers. The firm launched Cantor Prime Services in 2009, meant to be a provider of multi-asset prime brokerage platforms exploiting the firm's clearing, financing, and execution capabilities. A year later, Cantor Fitzgerald began building its real estate business with the launch of CCRE. Cantor's affiliate, BGC Partners, expanded into commercial real estate services in 2011 through its purchase of Newmark Knight Frank and the assets of Grubb & Ellis, forming Newmark Grubb Knight Frank.
On December 5, 2014, two Cantor Fitzgerald analysts were reported to rank among the top 25 analysts on TipRanks. Cantor Fitzgerald has a prolific special-purpose acquisition company underwriting practice, having led all banks in SPAC underwriting activity in both 2018 and 2019.
In 2023, Cantor Fitzgerald began servicing Tether, the unregulated cryptocurrency stablecoin known to be widely used for terrorist financing, sanctions evasion, and money laundering. As reported by the Wall Street Journal, Cantor Fitzgerald helped oversee Tether's $39 billion in Treasury holdings as of February 2023. Following reports of the illicit use of Tether to fund Hamas, congressional representatives have called for swift action by the Department of Justice to choke off the funding of terrorists.
Edie Lutnick wrote An Unbroken Bond: The Untold Story of How the 658 Cantor Fitzgerald Families Faced the Tragedy of 9/11 and Beyond. All proceeds from the book's sale benefit the Cantor Fitzgerald Relief Fund and the charities it assists.
The Cantor Fitzgerald Relief Fund provided a total of $10 million to families affected by Hurricane Sandy. Howard Lutnick and the Relief Fund "adopted" 19 elementary schools in impacted areas, distributing $1,000 prepaid debit cards to each family from those schools.
Two days after the 2013 Moore tornado struck Moore, Oklahoma, killing 24 people and injuring hundreds, Lutnick pledged to donate $2 million to families affected by the tornado. The donation was given to families in the form of $1,000 debit cards.
Each year, on September 11, Cantor Fitzgerald and its affiliate, BGC Partners, donate 100% of their revenue to charitable causes on their annual Charity Day, which was initially established to raise money to assist the families of the Cantor employees who died in the World Trade Center attacks. Since its inception, Charity Day has raised $110 million for charities globally.
The firm has many subsidiaries and affiliates, including: | [
{
"paragraph_id": 0,
"text": "Cantor Fitzgerald, L.P. is an American financial services firm that was founded in 1945. It specializes in institutional equity, fixed-income sales and trading, and serving the middle market with investment banking services, prime brokerage, and commercial real estate financing. It is also active in new businesses, including advisory and asset management services, gaming technology, and e-commerce. It has more than 5,000 institutional clients.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cantor Fitzgerald is one of 24 primary dealers that are authorized to trade US government securities with the Federal Reserve Bank of New York.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cantor Fitzgerald's 1,600 employees work in more than 30 locations, including financial centers in the Americas, Europe, Asia-Pacific, and the Middle East. Together with its affiliates, Cantor Fitzgerald operates in more than 60 offices in 20 countries and has more than 12,500 employees.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Before 2001, the company's headquarters were located between the 101st and 105th floors of the North Tower of the World Trade Center in New York City, just above the impact site of American Airlines Flight 11 during the September 11 attacks. All 658 Cantor Fitzgerald employees who were present that day were killed, representing the largest loss of life among any single organization in the attacks.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Cantor Fitzgerald was formed in 1945 by Bernard Gerald Cantor and John Fitzgerald as an investment bank and brokerage business. It later became known for its computer-based bond brokerage, the quality of its institutional distribution business model, and the market's premier government securities dealer.",
"title": "Early history"
},
{
"paragraph_id": 5,
"text": "In 1965, Cantor Fitzgerald began \"large block\" sales/trading of equities for institutional customers. It became the world's first electronic marketplace for US government securities in 1972, and in 1983, it was the first to offer worldwide screen brokerage services in US government securities.",
"title": "Early history"
},
{
"paragraph_id": 6,
"text": "In 1991, Howard Lutnick was named president and CEO of Cantor Fitzgerald; he became chairman of Cantor Fitzgerald, L.P., in 1996.",
"title": "Early history"
},
{
"paragraph_id": 7,
"text": "Cantor Fitzgerald's corporate headquarters and New York City office, on the 101st to the 105th floors of 1 World Trade Center in Lower Manhattan (2 to 6 floors above the impact zone of American Airlines Flight 11), were destroyed during the September 11, 2001 attacks. At 8:46:46 a.m., six seconds after the plane struck the tower, a Goldman Sachs server issued an alert saying that its trading system had gone offline because it could not connect with the server. Since all stairwells leading past the impact zone were destroyed by the initial crash or blocked with smoke, fire, or debris, every employee who reported for work that morning was killed in the attacks; 658 of its 960 New York employees were killed or missing, or 68.5% of its total workforce, which was considerably more than any of the other World Trade Center tenants, the New York City Police Department, the Port Authority Police Department, the New York City Fire Department, or the Department of Defense. Forty-six contractors, food service workers, and visitors in the Cantor Fitzgerald offices at the time were also killed. CEO Howard Lutnick was not present that day, but his younger brother, Gary, was among those killed. Lutnick vowed to keep the company alive, and the company was able to bring its trading markets back online within a week.",
"title": "9/11 attacks"
},
{
"paragraph_id": 8,
"text": "On September 19, Cantor Fitzgerald made a pledge to distribute 25% of the firm's profits for the next five years, and it committed to paying for ten years of health care for the benefit of the families of its 658 former Cantor Fitzgerald, eSpeed, and TradeSpark employees (profits that would otherwise have been distributed to the Cantor Fitzgerald partners). In 2006, the company had completed its promise, having paid a total of $180 million (and an additional $17 million from a relief fund run by Lutnick's sister, Edie).",
"title": "9/11 attacks"
},
{
"paragraph_id": 9,
"text": "Until the attacks, Cantor had handled about a quarter of the daily transactions in the multi-trillion dollar treasury security market. Cantor Fitzgerald has since rebuilt its infrastructure, partly through the efforts of its London office, and now has its headquarters in Midtown Manhattan. The company's effort to regain its footing was the subject of Tom Barbash's 2003 book On Top of the World: Cantor Fitzgerald, Howard Lutnick, and 9/11: A Story of Loss and Renewal as well as a 2012 documentary, Out of the Clear Blue Sky.",
"title": "9/11 attacks"
},
{
"paragraph_id": 10,
"text": "On September 2, 2004, Cantor and other organizations filed a civil lawsuit against Saudi Arabia for allegedly providing money to the hijackers and Al Qaeda. It was later joined in the suit by the Port Authority of New York. Most of the claims against Saudi Arabia were dismissed on January 18, 2005.",
"title": "9/11 attacks"
},
{
"paragraph_id": 11,
"text": "In December 2013, Cantor Fitzgerald settled its lawsuit against American Airlines for $135 million. Cantor Fitzgerald had been suing for loss of property and interruption of business by alleging the airline to have been negligent by allowing hijackers to board Flight 11.",
"title": "9/11 attacks"
},
{
"paragraph_id": 12,
"text": "In 2003, the firm launched its fixed-income sales and trading group. Three years later, the Federal Reserve added Cantor Fitzgerald & Co. to its list of primary dealers. The firm later launched Cantor Prime Services in 2009. It was meant to be a provider of multi-asset, perimeter brokerage prime brokerage platforms to exploit its clearing, financing, and execution capabilities. A year after, Cantor Fitzgerald began building its real estate business with the launch of CCRE. Cantor's affiliate, BGC Partners, expanded into commercial real estate services in 2011 by its purchase of Newmark Knight Frank and the assets of Grubb & Ellis, to form Newmark Grubb Knight Frank.",
"title": "Recent history"
},
{
"paragraph_id": 13,
"text": "On December 5, 2014, two Cantor Fitzgerald analysts were said to be in the top 25 analysts on TipRanks. Cantor Fitzgerald has a prolific special-purpose acquisition company underwriting practice, having led all banks in SPAC underwriting activity in both 2018 and 2019.",
"title": "Recent history"
},
{
"paragraph_id": 14,
"text": "",
"title": "Recent history"
},
{
"paragraph_id": 15,
"text": "In 2023, Cantor Fitzgerald began servicing Tether, the unregulated cryptocurrency stablecoin known to be widely used for terrorist financing, sanction evasion, and money laundering. As reported by the Wall Street Journal, Cantor Fitzgerald Helps Oversee Tether’s $39 Billion in Treasury Holdings as of February 2023. Following reports of illicit use of Tether for funding Hamas, congressional representatives have called for swift action by Department of Justice to choke off funding of terrorists.",
"title": "Relations to Tether and Terrorist Financing"
},
{
"paragraph_id": 16,
"text": "Edie wrote An Unbroken Bond: The Untold Story of How the 658 Cantor Fitzgerald Families Faced the Tragedy of 9/11 and Beyond. All proceeds from the book's sale benefit the Cantor Fitzgerald Relief Fund and the charities it assists.",
"title": "Philanthropy"
},
{
"paragraph_id": 17,
"text": "The Cantor Fitzgerald Relief Fund provided $10 million to families affected by Hurricane Sandy. Howard Lutnick and the Relief Fund \"adopted\" 19 elementary schools in impacted areas by distributing $1,000 prepaid debit cards to each family from the schools. A total of $10 million in funds was given to families affected by the storm.",
"title": "Philanthropy"
},
{
"paragraph_id": 18,
"text": "Two days after the 2013 Moore tornado struck Moore, Oklahoma, killing 24 people and injuring hundreds, Lutnick pledged to donate $2 million to families affected by the tornado. The donation was given to families in the form of $1,000 debit cards.",
"title": "Philanthropy"
},
{
"paragraph_id": 19,
"text": "Each year, on September 11, Cantor Fitzgerald and its affiliate, BGC Partners, donate 100% of their revenue to charitable causes on their annual Charity Day, which was initially established to raise money to assist the families of the Cantor employees who died in the World Trade Center attacks. Since its inception, Charity Day has raised $110 million for charities globally.",
"title": "Philanthropy"
},
{
"paragraph_id": 20,
"text": "The firm has many subsidiaries and affiliates, including:",
"title": "Subsidiaries and affiliates"
}
] | Cantor Fitzgerald, L.P. is an American financial services firm that was founded in 1945. It specializes in institutional equity, fixed-income sales and trading, and serving the middle market with investment banking services, prime brokerage, and commercial real estate financing. It is also active in new businesses, including advisory and asset management services, gaming technology, and e-commerce. It has more than 5,000 institutional clients. Cantor Fitzgerald is one of 24 primary dealers that are authorized to trade US government securities with the Federal Reserve Bank of New York. Cantor Fitzgerald's 1,600 employees work in more than 30 locations, including financial centers in the Americas, Europe, Asia-Pacific, and the Middle East. Together with its affiliates, Cantor Fitzgerald operates in more than 60 offices in 20 countries and has more than 12,500 employees. Before 2001, the company's headquarters were located between the 101st and 105th floors of the North Tower of the World Trade Center in New York City, just above the impact site of American Airlines Flight 11 during the September 11 attacks. All 658 Cantor Fitzgerald employees who were present that day were killed, the largest loss of life of any single organization in the attacks. | 2001-10-31T05:21:06Z | 2023-12-12T23:07:20Z | [
"Template:Cn",
"Template:Portal",
"Template:Investment banks",
"Template:Authority control",
"Template:Infobox company",
"Template:Further",
"Template:Cite web",
"Template:Cite news",
"Template:Short description",
"Template:Main",
"Template:Cbignore",
"Template:Official website",
"Template:Use mdy dates",
"Template:Reflist",
"Template:Cite magazine",
"Template:Cite book",
"Template:More references"
] | https://en.wikipedia.org/wiki/Cantor_Fitzgerald |
6,641 | Cane toad | The cane toad (Rhinella marina), also known as the giant neotropical toad or marine toad, is a large, terrestrial true toad native to South and mainland Central America, but which has been introduced to various islands throughout Oceania and the Caribbean, as well as Northern Australia. It is a member of the genus Rhinella, which includes many true toad species found throughout Central and South America, but it was formerly assigned to the genus Bufo.
The cane toad is an old species. A fossil toad (specimen UCMP 41159) from the La Venta fauna of the late Miocene in Colombia is indistinguishable from modern cane toads from northern South America. It was discovered in a floodplain deposit, which suggests the R. marina habitat preferences have long been for open areas. The cane toad is a prolific breeder; females lay single-clump spawns with thousands of eggs. Its reproductive success is partly because of opportunistic feeding: it has a diet, unusual among anurans, of both dead and living matter. Adults average 10–15 cm (4–6 in) in length; the largest recorded specimen had a snout-vent length of 24 cm (9.4 in).
The cane toad has poison glands, and the tadpoles are highly toxic to most animals if ingested. Its toxic skin can kill many animals, both wild and domesticated, and cane toads are particularly dangerous to dogs. Because of its voracious appetite, the cane toad has been introduced to many regions of the Pacific and the Caribbean islands as a method of agricultural pest control. The common name of the species is derived from its use against the cane beetle (Dermolepida albohirtum), which damages sugar cane. The cane toad is now considered a pest and an invasive species in many of its introduced regions. The 1988 film Cane Toads: An Unnatural History documented the trials and tribulations of the introduction of cane toads in Australia.
Historically, the cane toad was used to eradicate pests from sugarcane, giving rise to its common name. The cane toad has many other common names, including "giant toad" and "marine toad"; the former refers to its size, and the latter to the binomial name, R. marina. It was one of many species described by Carl Linnaeus in his 18th-century work Systema Naturae (1758). Linnaeus based the specific epithet marina on an illustration by Dutch zoologist Albertus Seba, who mistakenly believed the cane toad to inhabit both terrestrial and marine environments. Other common names include "giant neotropical toad", "Dominican toad", "giant marine toad", and "South American cane toad". In Trinidadian English, they are commonly called crapaud, the French word for toad.
Rhinella is now considered to constitute a distinct genus of its own, thus changing the scientific name of the cane toad. In this case, the specific name marinus (masculine) changes to marina (feminine) to conform with the rules of gender agreement as set out by the International Code of Zoological Nomenclature, changing the binomial name from Bufo marinus to Rhinella marina; the binomial Rhinella marinus was subsequently introduced as a synonym through misspelling by Pramuk, Robertson, Sites, and Noonan (2008). Though controversial (with many traditional herpetologists still using Bufo marinus), the binomial Rhinella marina is gaining acceptance with such bodies as the IUCN, Encyclopaedia of Life, and Amphibian Species of the World, and increasing numbers of scientific publications are adopting its usage.
Since 2016, cane toad populations native to Mesoamerica and northwestern South America have sometimes been considered a separate species, Rhinella horribilis.
In Australia, the adults may be confused with large native frogs from the genera Limnodynastes, Cyclorana, and Mixophyes. These species can be distinguished from the cane toad by the absence of large parotoid glands behind their eyes and the lack of a ridge between the nostril and the eye. Cane toads have been confused with the giant burrowing frog (Heleioporus australiacus), because both are large and warty in appearance; however, the latter can be readily distinguished from the former by its vertical pupils and its silver-grey (as opposed to gold) irises. Juvenile cane toads may be confused with species of the genus Uperoleia, but can be distinguished by the lack of bright colouring on the groin and thighs.
In the United States, the cane toad closely resembles many bufonid species. In particular, it could be confused with the southern toad (Bufo terrestris), which can be distinguished by the presence of two bulbs in front of the parotoid glands.
The cane toad genome has been sequenced, and certain Australian academics believe this will help in understanding how the toad can quickly evolve to adapt to new environments and how its infamous toxin works, and they hope it will provide new options for halting the species' march across Australia and the other places to which it has spread as an invasive pest.
Studies of the genome confirm its evolutionary origins in the northern part of South America and its close genetic relation to Rhinella diptycha and other similar species of the genus. Recent studies suggest that R. marina diverged between 2.75 and 9.40 million years ago.
A further split of the species into subspecies may have occurred approximately 2.7 million years ago, following the isolation of population groups by the rising Venezuelan Andes.
The cane toad is considered the largest species in the Bufonidae; females are significantly longer than males, reaching a typical length of 10–15 cm (4–6 in), with a maximum of 24 cm (9.4 in). Larger toads tend to be found in areas of lower population density. They have a life expectancy of 10 to 15 years in the wild, and can live considerably longer in captivity, with one specimen reportedly surviving for 35 years.
The skin of the cane toad is dry and warty. Distinct ridges above the eyes run down the snout. Individual cane toads can be grey, yellowish, red-brown, or olive-brown, with varying patterns. A large parotoid gland lies behind each eye. The ventral surface is cream-coloured and may have blotches in shades of black or brown. The pupils are horizontal and the irises golden. The toes have a fleshy webbing at their base, and the fingers are free of webbing.
Typically, juvenile cane toads have smooth, dark skin, although some specimens have a red wash. Juveniles lack the adults' large parotoid glands, so they are usually less poisonous. The tadpoles are small and uniformly black, and are bottom-dwellers, tending to form schools. Tadpoles range from 10 to 25 mm (0.4 to 1.0 in) in length.
The common name "marine toad" and the scientific name Rhinella marina suggest a link to marine life, but cane toads do not live in the sea. However, laboratory experiments suggest that tadpoles can tolerate salt concentrations equivalent to 15% of seawater (~5.4‰), and recent field observations found living tadpoles and toadlets at salinities of 27.5‰ on Coiba Island, Panama. The cane toad inhabits open grassland and woodland, and has displayed a "distinct preference" for areas modified by humans, such as gardens and drainage ditches. In their native habitats, the toads can be found in subtropical forests, although dense foliage tends to limit their dispersal.
The cane toad begins life as an egg, which is laid as part of long strings of jelly in water. A female lays 8,000–25,000 eggs at once and the strings can stretch up to 20 m (66 ft) in length. The black eggs are covered by a membrane and their diameter is about 1.7–2.0 mm (0.067–0.079 in). The rate at which an egg grows into a tadpole increases with temperature. Tadpoles typically hatch within 48 hours, but the period can vary from 14 hours to almost a week. This process usually involves thousands of tadpoles—which are small, black, and have short tails—forming into groups. Between 12 and 60 days are needed for the tadpoles to develop into juveniles, with four weeks being typical. Similarly to their adult counterparts, eggs and tadpoles are toxic to many animals.
When they emerge, toadlets typically are about 10–11 mm (0.39–0.43 in) in length, and grow rapidly. While the rate of growth varies by region, time of year, and gender, an average initial growth rate of 0.647 mm (0.0255 in) per day is seen, followed by an average rate of 0.373 mm (0.0147 in) per day. Growth typically slows once the toads reach sexual maturity. This rapid growth is important for their survival; in the period between metamorphosis and subadulthood, the young toads lose the toxicity that protected them as eggs and tadpoles, but have yet to fully develop the parotoid glands that produce bufotoxin. Only an estimated 0.5% of cane toads reach adulthood, in part because they lack this key defense—but also due to tadpole cannibalism. Although cannibalism does occur in the native population in South America, the rapid evolution occurring in the unnaturally large population in Australia has produced tadpoles 30x more likely to be interested in cannibalising their siblings, and 2.6x more likely to actually do so. They have also evolved to shorten their tadpole phase in response to the presence of older tadpoles. These changes are likely genetic, although no genetic basis has been determined.
As with rates of growth, the point at which the toads become sexually mature varies across different regions. In New Guinea, sexual maturity is reached by female toads with a snout–vent length between 70 and 80 mm (2.8 and 3.1 in), while toads in Panama achieve maturity when they are between 90 and 100 mm (3.5 and 3.9 in) in length. In tropical regions, such as their native habitats, breeding occurs throughout the year, but in subtropical areas, breeding occurs only during warmer periods that coincide with the onset of the wet season.
The cane toad is estimated to have a critical thermal maximum of 40–42 °C (104–108 °F) and a minimum of around 10–15 °C (50–59 °F). The ranges can change due to adaptation to the local environment. Cane toads from some populations can adjust their thermal tolerance within a few hours of encountering low temperatures. The toad is able to rapidly acclimate to the cold using physiological plasticity, though there is also evidence that more northerly populations of cane toads in the United States are better cold-adapted than more southerly populations. These adaptations have allowed the cane toad to establish invasive populations across the world. The toad's ability to rapidly acclimate to thermal changes suggests that current models may underestimate the potential range of habitats that the toad can populate. The cane toad has a high tolerance to water loss; some can withstand a 52.6% loss of body water, allowing them to survive outside tropical environments.
Most frogs identify prey by movement, and vision appears to be the primary method by which the cane toad detects prey; however, it can also locate food using its sense of smell. They eat a wide range of material; in addition to the normal prey of small rodents, other small mammals, reptiles, other amphibians, birds, and even bats and a range of invertebrates (such as ants, beetles, earwigs, dragonflies, grasshoppers, true bugs, crustaceans, and gastropods), they also eat plants, dog food, cat food, feces, and household refuse.
The skin of the adult cane toad is toxic, as are the enlarged parotoid glands behind the eyes and other glands across its back. When the toad is threatened, its glands secrete a milky-white fluid known as bufotoxin. Components of bufotoxin are toxic to many animals; even human deaths have been recorded due to the consumption of cane toads. Dogs are especially prone to poisoning from licking or biting toads. Pets showing excessive drooling, extremely red gums, head-shaking, crying, loss of coordination, and/or convulsions require immediate veterinary attention.
Bufotenin, one of the chemicals excreted by the cane toad, is classified as a schedule 9 drug under Australian law, alongside heroin and LSD. The effects of bufotenin are thought to be similar to those of mild poisoning; the stimulation, which includes mild hallucinations, lasts less than an hour. As the cane toad excretes bufotenin in small amounts, and other toxins in relatively large quantities, toad licking could result in serious illness or death.
In addition to releasing toxin, the cane toad is capable of inflating its lungs, puffing up, and lifting its body off the ground to appear taller and larger to a potential predator.
Since 2011, experimenters in the Kimberley region of Western Australia have used poisonous sausages containing toad meat in an attempt to protect native animals from cane toads' deadly impact. The Western Australian Department of Environment and Conservation, along with the University of Sydney, developed these sausage-shaped baits as a tool to train native animals not to eat the toads. By blending bits of toad with a nausea-inducing chemical, the baits train the animals to stay away from the amphibians.
Many species prey on the cane toad and its tadpoles in its native habitat, including the broad-snouted caiman (Caiman latirostris), the banded cat-eyed snake (Leptodeira annulata), eels (family Anguillidae), various species of killifish, the rock flagtail (Kuhlia rupestris), some species of catfish (order Siluriformes), some species of ibis (subfamily Threskiornithinae), and Paraponera clavata (bullet ants).
Predators outside the cane toad's native range include the whistling kite (Haliastur sphenurus), the rakali (Hydromys chrysogaster), the black rat (Rattus rattus) and the water monitor (Varanus salvator). The tawny frogmouth (Podargus strigoides) and the Papuan frogmouth (Podargus papuensis) have been reported as feeding on cane toads; some Australian crows (Corvus spp.) have also learned strategies allowing them to feed on cane toads, such as using their beak to flip toads onto their backs. Kookaburras also prey on the amphibians.
Opossums of the genus Didelphis likely can eat cane toads with impunity. Meat ants are unaffected by the cane toads' toxins, so are able to kill them. The cane toad's normal response to attack is to stand still and let its toxin kill the attacker, which allows the ants to attack and eat the toad. Saw-shelled turtles have also been seen successfully and safely eating cane toads.
The cane toad is native to the Americas, and its range stretches from the Rio Grande Valley in South Texas to the central Amazon and southeastern Peru, and some of the continental islands near Venezuela (such as Trinidad and Tobago). This area encompasses both tropical and semiarid environments. The density of the cane toad is significantly lower within its native distribution than in places where it has been introduced. In South America, the density was recorded to be 20 adults per 100 m (110 yd) of shoreline, 1 to 2% of the density in Australia.
The cane toad has been introduced to many regions of the world—particularly the Pacific—for the biological control of agricultural pests. These introductions have generally been well documented, and the cane toad may be one of the most studied of any introduced species.
Before the early 1840s, the cane toad had been introduced into Martinique and Barbados, from French Guiana and Guyana. An introduction to Jamaica was made in 1844 in an attempt to reduce the rat population. Despite its failure to control the rodents, the cane toad was introduced to Puerto Rico in the early 20th century in the hope that it would counter a beetle infestation ravaging the sugarcane plantations. The Puerto Rican scheme was successful and halted the economic damage caused by the beetles, prompting scientists in the 1930s to promote it as an ideal solution to agricultural pests.
As a result, many countries in the Pacific region emulated the lead of Puerto Rico and introduced the toad in the 1930s. Introduced populations are in Australia, Florida, Papua New Guinea, the Philippines, the Ogasawara, Ishigaki, and Daitō Islands of Japan, Caotun in Nantou County, Taiwan, most Caribbean islands, Fiji and many other Pacific islands, including Hawaiʻi. Since then, the cane toad has become a pest in many host countries, and poses a serious threat to native animals.
Following the apparent success of the cane toad in eating the beetles threatening the sugarcane plantations of Puerto Rico, and the fruitful introductions into Hawaiʻi and the Philippines, a strong push was made for the cane toad to be released in Australia to negate the pests ravaging the Queensland cane fields. As a result, 102 toads were collected from Hawaiʻi and brought to Australia. Queensland's sugar scientists released the toad into cane fields in August 1935. After this initial release, the Commonwealth Department of Health decided to ban future introductions until a study was conducted into the feeding habits of the toad. The study was completed in 1936 and the ban lifted, when large-scale releases were undertaken; by March 1937, 62,000 toadlets had been released into the wild. The toads became firmly established in Queensland, increasing exponentially in number and extending their range into the Northern Territory and New South Wales. In 2010, one was found on the far western coast in Broome, Western Australia.
However, the toad was generally unsuccessful in reducing the targeted grey-backed cane beetles (Dermolepida albohirtum), in part because the cane fields provided insufficient shelter for the predators during the day, and in part because the beetles live at the tops of sugar cane—and cane toads are not good climbers. Since its original introduction, the cane toad has had a particularly marked effect on Australian biodiversity. The population of a number of native predatory reptiles has declined, such as the varanid lizards Varanus mertensi, V. mitchelli, and V. panoptes, the land snakes Pseudechis australis and Acanthophis antarcticus, and the crocodile species Crocodylus johnstoni; in contrast, the population of the agamid lizard Amphibolurus gilberti—known to be a prey item of V. panoptes—has increased. Meat ants, however, are able to kill cane toads. The cane toad has also been linked to decreases in northern quolls in the southern region of Kakadu National Park and even their local extinction.
The cane toad was introduced to various Caribbean islands to counter a number of pests infesting local crops. While it was able to establish itself on some islands, such as Barbados, Jamaica, Hispaniola and Puerto Rico, other introductions, such as in Cuba before 1900 and in 1946, and on the islands of Dominica and Grand Cayman, were unsuccessful.
The earliest recorded introductions were to Barbados and Martinique. The Barbados introductions were focused on the biological control of pests damaging the sugarcane crops, and while the toads became abundant, they have done even less to control the pests than in Australia. The toad was introduced to Martinique from French Guiana before 1944 and became established. Today, they reduce the mosquito and mole cricket populations. A third introduction to the region occurred in 1884, when toads appeared in Jamaica, reportedly imported from Barbados to help control the rodent population. While they had no significant effect on the rats, they nevertheless became well established. Other introductions include the release on Antigua—possibly before 1916, although this initial population may have died out by 1934 and been reintroduced at a later date—and Montserrat, which had an introduction before 1879 that led to the establishment of a solid population, which was apparently sufficient to survive the Soufrière Hills volcano eruption in 1995.
In 1920, the cane toad was introduced into Puerto Rico to control the populations of white grub (Phyllophaga spp.), a sugarcane pest. Before this, the pests were manually collected by humans, so the introduction of the toad eliminated labor costs. A second group of toads was imported in 1923, and by 1932, the cane toad was well established. The population of white grubs dramatically decreased, and this was attributed to the cane toad at the annual meeting of the International Sugar Cane Technologists in Puerto Rico. However, there may have been other factors. The six-year period after 1931—when the cane toad was most prolific, and the white grub had a dramatic decline—had the highest-ever rainfall for Puerto Rico. Nevertheless, the cane toad was assumed to have controlled the white grub; this view was reinforced by a Nature article titled "Toads save sugar crop", and this led to large-scale introductions throughout many parts of the Pacific.
The cane toad has been spotted in Carriacou and Dominica, the latter appearance occurring in spite of the failure of the earlier introductions. On September 8, 2013, the cane toad was also discovered on the island of New Providence in the Bahamas.
The cane toad was first introduced deliberately into the Philippines in 1930 as a biological control agent of pests in sugarcane plantations, after the success of the experimental introductions into Puerto Rico. It subsequently became the most ubiquitous amphibian in the islands. It still retains the common name of bakî or kamprag in the Visayan languages, a corruption of 'American frog', referring to its origins. It is also commonly known as "bullfrog" in Philippine English.
The cane toad was introduced into Fiji to combat insects that infested sugarcane plantations. The introduction of the cane toad to the region was first suggested in 1933, following the successes in Puerto Rico and Hawaiʻi. After considering the possible side effects, the national government of Fiji decided to release the toad in 1953, and 67 specimens were subsequently imported from Hawaiʻi. Once the toads were established, a 1963 study concluded that, as the toad's diet included both harmful and beneficial invertebrates, it should be considered "economically neutral". Today, the cane toad can be found on all major islands in Fiji, although they tend to be smaller than their counterparts in other regions.
The cane toad was introduced into New Guinea to control the hawk moth larvae eating sweet potato crops. The first release occurred in 1937 using toads imported from Hawaiʻi, with a second release the same year using specimens from the Australian mainland. Evidence suggests a third release in 1938, consisting of toads being used for human pregnancy tests—many species of toad were found to be effective for this task, and were employed for about 20 years after the discovery was announced in 1948. Initial reports argued that the toads were effective in reducing the levels of cutworms, and sweet potato yields were thought to be improving. As a result, these first releases were followed by further distributions across much of the region, although their effectiveness on other crops, such as cabbages, has been questioned; when the toads were released at Wau, the cabbages provided insufficient shelter and the toads rapidly left the immediate area for the superior shelter offered by the forest. A similar situation had previously arisen in the Australian cane fields, but this experience was either unknown or ignored in New Guinea. The cane toad has since become abundant in rural and urban areas.
The cane toad naturally exists in South Texas, but attempts (both deliberate and accidental) have been made to introduce the species to other parts of the country. These include introductions to Florida and to Hawaiʻi, as well as largely unsuccessful introductions to Louisiana.
Initial releases into Florida failed. Attempted introductions before 1936 and 1944, intended to control sugarcane pests, were unsuccessful as the toads failed to proliferate. Later attempts failed in the same way. However, the toad gained a foothold in the state after an accidental release by an importer at Miami International Airport in 1957, and deliberate releases by animal dealers in 1963 and 1964 established the toad in other parts of Florida. Today, the cane toad is well established in the state, from the Keys to north of Tampa, and it is gradually extending further northward. In Florida, the toad is regarded as a threat to native species and pets, so much so that the Florida Fish and Wildlife Conservation Commission recommends that residents kill them.
Around 150 cane toads were introduced to Oʻahu in Hawaiʻi in 1932, and the population swelled to 105,517 after 17 months. The toads were sent to the other islands, and more than 100,000 toads were distributed by July 1934; eventually over 600,000 were transported.
Other than its use as a biological control for pests, the cane toad has been employed in a number of commercial and noncommercial applications. Traditionally, within the toad's natural range in South America, the Embera-Wounaan would "milk" the toads for their toxin, which was then employed as an arrow poison. The toxins may have been used as an entheogen by the Olmec people. The toad has been hunted as a food source in parts of Peru, and eaten after the careful removal of the skin and parotoid glands. When properly prepared, the meat of the toad is considered healthy and a source of omega-3 fatty acids. More recently, the toad's toxins have been used in a number of new ways: bufotenin has been used in Japan as an aphrodisiac and a hair restorer, and in cardiac surgery in China to lower the heart rates of patients. New research has suggested that the cane toad's poison may have some applications in treating prostate cancer.
Other modern applications of the cane toad include pregnancy testing, as pets, laboratory research, and the production of leather goods. Pregnancy testing was conducted in the mid-20th century by injecting urine from a woman into a male toad's lymph sacs, and if spermatozoa appeared in the toad's urine, the patient was deemed to be pregnant. The tests using toads were faster than those employing mammals; the toads were easier to raise, and, although the initial 1948 discovery employed Bufo arenarum for the tests, it soon became clear that a variety of anuran species were suitable, including the cane toad. As a result, toads were employed in this task for around 20 years. As a laboratory animal, the cane toad has numerous advantages: they are plentiful, and easy and inexpensive to maintain and handle. The use of the cane toad in experiments started in the 1950s, and by the end of the 1960s, large numbers were being collected and exported to high schools and universities. Since then, a number of Australian states have introduced or tightened importation regulations.
There are several commercial uses for dead cane toads. Cane toad skin is made into leather and novelty items. Stuffed cane toads, posed and accessorised, are merchandised at souvenir shops for tourists. Attempts have been made to produce fertiliser from toad carcasses. | [
{
"paragraph_id": 0,
"text": "The cane toad (Rhinella marina), also known as the giant neotropical toad or marine toad, is a large, terrestrial true toad native to South and mainland Central America, but which has been introduced to various islands throughout Oceania and the Caribbean, as well as Northern Australia. It is a member of the genus Rhinella, which includes many true toad species found throughout Central and South America, but it was formerly assigned to the genus Bufo.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The cane toad is an old species. A fossil toad (specimen UCMP 41159) from the La Venta fauna of the late Miocene in Colombia is indistinguishable from modern cane toads from northern South America. It was discovered in a floodplain deposit, which suggests the R. marina habitat preferences have long been for open areas. The cane toad is a prolific breeder; females lay single-clump spawns with thousands of eggs. Its reproductive success is partly because of opportunistic feeding: it has a diet, unusual among anurans, of both dead and living matter. Adults average 10–15 cm (4–6 in) in length; the largest recorded specimen had a snout-vent length of 24 cm (9.4 in).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The cane toad has poison glands, and the tadpoles are highly toxic to most animals if ingested. Its toxic skin can kill many animals, both wild and domesticated, and cane toads are particularly dangerous to dogs. Because of its voracious appetite, the cane toad has been introduced to many regions of the Pacific and the Caribbean islands as a method of agricultural pest control. The common name of the species is derived from its use against the cane beetle (Dermolepida albohirtum), which damages sugar cane. The cane toad is now considered a pest and an invasive species in many of its introduced regions. The 1988 film Cane Toads: An Unnatural History documented the trials and tribulations of the introduction of cane toads in Australia.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Historically, the cane toad was used to eradicate pests from sugarcane, giving rise to its common name. The cane toad has many other common names, including \"giant toad\" and \"marine toad\"; the former refers to its size, and the latter to the binomial name, R. marina. It was one of many species described by Carl Linnaeus in his 18th-century work Systema Naturae (1758). Linnaeus based the specific epithet marina on an illustration by Dutch zoologist Albertus Seba, who mistakenly believed the cane toad to inhabit both terrestrial and marine environments. Other common names include \"giant neotropical toad\", \"Dominican toad\", \"giant marine toad\", and \"South American cane toad\". In Trinidadian English, they are commonly called crapaud, the French word for toad.",
"title": "Taxonomy"
},
{
"paragraph_id": 4,
"text": "The genus Rhinella is considered to constitute a distinct genus of its own, thus changing the scientific name of the cane toad. In this case, the specific name marinus (masculine) changes to marina (feminine) to conform with the rules of gender agreement as set out by the International Code of Zoological Nomenclature, changing the binomial name from Bufo marinus to Rhinella marina; the binomial Rhinella marinus was subsequently introduced as a synonym through misspelling by Pramuk, Robertson, Sites, and Noonan (2008). Though controversial (with many traditional herpetologists still using Bufo marinus) the binomial Rhinella marina is gaining in acceptance with such bodies as the IUCN, Encyclopaedia of Life, Amphibian Species of the World and increasing numbers of scientific publications adopting its usage.",
"title": "Taxonomy"
},
{
"paragraph_id": 5,
"text": "Since 2016, cane toad populations native to Mesoamerica and northwestern South America are sometimes considered to be a separate species, Rhinella horribilis.",
"title": "Taxonomy"
},
{
"paragraph_id": 6,
"text": "In Australia, the adults may be confused with large native frogs from the genera Limnodynastes, Cyclorana, and Mixophyes. These species can be distinguished from the cane toad by the absence of large parotoid glands behind their eyes and the lack of a ridge between the nostril and the eye. Cane toads have been confused with the giant burrowing frog (Heleioporus australiacus), because both are large and warty in appearance; however, the latter can be readily distinguished from the former by its vertical pupils and its silver-grey (as opposed to gold) irises. Juvenile cane toads may be confused with species of the genus Uperoleia, but their adult colleagues can be distinguished by the lack of bright colouring on the groin and thighs.",
"title": "Taxonomy"
},
{
"paragraph_id": 7,
"text": "In the United States, the cane toad closely resembles many bufonid species. In particular, it could be confused with the southern toad (Bufo terrestris), which can be distinguished by the presence of two bulbs in front of the parotoid glands.",
"title": "Taxonomy"
},
{
"paragraph_id": 8,
"text": "The cane toad genome has been sequenced and certain Australian academics believe this will help in understanding how the toad can quickly evolve to adapt to new environments, the workings of its infamous toxin, and hopefully provide new options for halting this species' march across Australia and other places it has spread as an invasive pest.",
"title": "Taxonomy"
},
{
"paragraph_id": 9,
"text": "Studies of the genome confirm its evolutionary origins in northern part of South America and its close genetic relation to Rhinella diptycha and other similar species of the genus. Recent studies suggest that R. marina diverged between 2.75 and 9.40 million years ago.",
"title": "Taxonomy"
},
{
"paragraph_id": 10,
"text": "A recent split in the species into further subspecies may have occurred approximately 2.7 million years ago following the isolation of population groups by the rising Venezuelan Andes.",
"title": "Taxonomy"
},
{
"paragraph_id": 11,
"text": "Considered the largest species in the Bufonidae, the cane toad is very large; the females are significantly longer than males, reaching a typical length of 10–15 cm (4–6 in), with a maximum of 24 cm (9.4 in). Larger toads tend to be found in areas of lower population density. They have a life expectancy of 10 to 15 years in the wild, and can live considerably longer in captivity, with one specimen reportedly surviving for 35 years.",
"title": "Description"
},
{
"paragraph_id": 12,
"text": "The skin of the cane toad is dry and warty. Distinct ridges above the eyes run down the snout. Individual cane toads can be grey, yellowish, red-brown, or olive-brown, with varying patterns. A large parotoid gland lies behind each eye. The ventral surface is cream-coloured and may have blotches in shades of black or brown. The pupils are horizontal and the irises golden. The toes have a fleshy webbing at their base, and the fingers are free of webbing.",
"title": "Description"
},
{
"paragraph_id": 13,
"text": "Typically, juvenile cane toads have smooth, dark skin, although some specimens have a red wash. Juveniles lack the adults' large parotoid glands, so they are usually less poisonous. The tadpoles are small and uniformly black, and are bottom-dwellers, tending to form schools. Tadpoles range from 10 to 25 mm (0.4 to 1.0 in) in length.",
"title": "Description"
},
{
"paragraph_id": 14,
"text": "The common name \"marine toad\" and the scientific name Rhinella marina suggest a link to marine life, but cane toads do not live in the sea. However, laboratory experiments suggest that tadpoles can tolerate salt concentrations equivalent to 15% of seawater (~5.4‰), and recent field observations found living tadpoles and toadlets at salinities of 27.5‰ on Coiba Island, Panama. The cane toad inhabits open grassland and woodland, and has displayed a \"distinct preference\" for areas modified by humans, such as gardens and drainage ditches. In their native habitats, the toads can be found in subtropical forests, although dense foliage tends to limit their dispersal.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 15,
"text": "The cane toad begins life as an egg, which is laid as part of long strings of jelly in water. A female lays 8,000–25,000 eggs at once and the strings can stretch up to 20 m (66 ft) in length. The black eggs are covered by a membrane and their diameter is about 1.7–2.0 mm (0.067–0.079 in). The rate at which an egg grows into a tadpole increases with temperature. Tadpoles typically hatch within 48 hours, but the period can vary from 14 hours to almost a week. This process usually involves thousands of tadpoles—which are small, black, and have short tails—forming into groups. Between 12 and 60 days are needed for the tadpoles to develop into juveniles, with four weeks being typical. Similarly to their adult counterparts, eggs and tadpoles are toxic to many animals.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 16,
"text": "When they emerge, toadlets typically are about 10–11 mm (0.39–0.43 in) in length, and grow rapidly. While the rate of growth varies by region, time of year, and gender, an average initial growth rate of 0.647 mm (0.0255 in) per day is seen, followed by an average rate of 0.373 mm (0.0147 in) per day. Growth typically slows once the toads reach sexual maturity. This rapid growth is important for their survival; in the period between metamorphosis and subadulthood, the young toads lose the toxicity that protected them as eggs and tadpoles, but have yet to fully develop the parotoid glands that produce bufotoxin. Only an estimated 0.5% of cane toads reach adulthood, in part because they lack this key defense—but also due to tadpole cannibalism. Although cannibalism does occur in the native population in South America, the rapid evolution occurring in the unnaturally large population in Australia has produced tadpoles 30x more likely to be interested in cannibalising their siblings, and 2.6x more likely to actually do so. They have also evolved to shorten their tadpole phase in response to the presence of older tadpoles. These changes are likely genetic, although no genetic basis has been determined.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 17,
"text": "As with rates of growth, the point at which the toads become sexually mature varies across different regions. In New Guinea, sexual maturity is reached by female toads with a snout–vent length between 70 and 80 mm (2.8 and 3.1 in), while toads in Panama achieve maturity when they are between 90 and 100 mm (3.5 and 3.9 in) in length. In tropical regions, such as their native habitats, breeding occurs throughout the year, but in subtropical areas, breeding occurs only during warmer periods that coincide with the onset of the wet season.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 18,
"text": "The cane toad is estimated to have a critical thermal maximum of 40–42 °C (104–108 °F) and a minimum of around 10–15 °C (50–59 °F). The ranges can change due to adaptation to the local environment. Cane toads from some populations can adjust their thermal tolerance within a few hours of encountering low temperatures. The toad is able to rapidly acclimate to the cold using physiological plasticity, though there is also evidence that more northerly populations of cane toads in the United States are better cold-adapted than more southerly populations. These adaptations have allowed the cane toad to establish invasive populations across the world. The toad's ability to rapidly acclimate to thermal changes suggests that current models may underestimate the potential range of habitats that the toad can populate. The cane toad has a high tolerance to water loss; some can withstand a 52.6% loss of body water, allowing them to survive outside tropical environments.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 19,
"text": "Most frogs identify prey by movement, and vision appears to be the primary method by which the cane toad detects prey; however, it can also locate food using its sense of smell. They eat a wide range of material; in addition to the normal prey of small rodents, other small mammals, reptiles, other amphibians, birds, and even bats and a range of invertebrates (such as ants, beetles, earwigs, dragonflies, grasshoppers, true bugs, crustaceans, and gastropods), they also eat plants, dog food, cat food, feces, and household refuse.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 20,
"text": "The skin of the adult cane toad is toxic, as well as the enlarged parotoid glands behind the eyes, and other glands across its back. When the toad is threatened, its glands secrete a milky-white fluid known as bufotoxin. Components of bufotoxin are toxic to many animals; even human deaths have been recorded due to the consumption of cane toads. Dogs are especially prone to be poisoned by licking or biting toads. Pets showing excessive drooling, extremely red gums, head-shaking, crying, loss of coordination, and/or convulsions require immediate veterinary attention.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 21,
"text": "Bufotenin, one of the chemicals excreted by the cane toad, is classified as a schedule 9 drug under Australian law, alongside heroin and LSD. The effects of bufotenin are thought to be similar to those of mild poisoning; the stimulation, which includes mild hallucinations, lasts less than an hour. As the cane toad excretes bufotenin in small amounts, and other toxins in relatively large quantities, toad licking could result in serious illness or death.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 22,
"text": "In addition to releasing toxin, the cane toad is capable of inflating its lungs, puffing up, and lifting its body off the ground to appear taller and larger to a potential predator.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 23,
"text": "Since 2011, experimenters in the Kimberley region of Western Australia have used poisonous sausages containing toad meat in an attempt to protect native animals from cane toads' deadly impact. The Western Australian Department of Environment and Conservation, along with the University of Sydney, developed these sausage-shaped baits as a tool in order to train native animals not to eat the toads. By blending bits of toad with a nausea-inducing chemical, the baits train the animals to stay away from the amphibians.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 24,
"text": "Many species prey on the cane toad and its tadpoles in its native habitat, including the broad-snouted caiman (Caiman latirostris), the banded cat-eyed snake (Leptodeira annulata), eels (family Anguillidae), various species of killifish, the rock flagtail (Kuhlia rupestris), some species of catfish (order Siluriformes), some species of ibis (subfamily Threskiornithinae), and Paraponera clavata (bullet ants).",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 25,
"text": "Predators outside the cane toad's native range include the whistling kite (Haliastur sphenurus), the rakali (Hydromys chrysogaster), the black rat (Rattus rattus) and the water monitor (Varanus salvator). The tawny frogmouth (Podargus strigoides) and the Papuan frogmouth (Podargus papuensis) have been reported as feeding on cane toads; some Australian crows (Corvus spp.) have also learned strategies allowing them to feed on cane toads, such as using their beak to flip toads onto their backs. Kookaburras also prey on the amphibians.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 26,
"text": "Opossums of the genus Didelphis likely can eat cane toads with impunity. Meat ants are unaffected by the cane toads' toxins, so are able to kill them. The cane toad's normal response to attack is to stand still and let its toxin kill the attacker, which allows the ants to attack and eat the toad. Saw-shelled turtles have also been seen successfully and safely eating cane toads.",
"title": "Ecology, behaviour and life history"
},
{
"paragraph_id": 27,
"text": "The cane toad is native to the Americas, and its range stretches from the Rio Grande Valley in South Texas to the central Amazon and southeastern Peru, and some of the continental islands near Venezuela (such as Trinidad and Tobago). This area encompasses both tropical and semiarid environments. The density of the cane toad is significantly lower within its native distribution than in places where it has been introduced. In South America, the density was recorded to be 20 adults per 100 m (110 yd) of shoreline, 1 to 2% of the density in Australia.",
"title": "Distribution"
},
{
"paragraph_id": 28,
"text": "The cane toad has been introduced to many regions of the world—particularly the Pacific—for the biological control of agricultural pests. These introductions have generally been well documented, and the cane toad may be one of the most studied of any introduced species.",
"title": "Distribution"
},
{
"paragraph_id": 29,
"text": "Before the early 1840s, the cane toad had been introduced into Martinique and Barbados, from French Guiana and Guyana. An introduction to Jamaica was made in 1844 in an attempt to reduce the rat population. Despite its failure to control the rodents, the cane toad was introduced to Puerto Rico in the early 20th century in the hope that it would counter a beetle infestation ravaging the sugarcane plantations. The Puerto Rican scheme was successful and halted the economic damage caused by the beetles, prompting scientists in the 1930s to promote it as an ideal solution to agricultural pests.",
"title": "Distribution"
},
{
"paragraph_id": 30,
"text": "As a result, many countries in the Pacific region emulated the lead of Puerto Rico and introduced the toad in the 1930s. Introduced populations are in Australia, Florida, Papua New Guinea, the Philippines, the Ogasawara, Ishigaki Island and the Daitō Islands of Japan, Taiwan Nantou Caotun, most Caribbean islands, Fiji and many other Pacific islands, including Hawaiʻi. Since then, the cane toad has become a pest in many host countries, and poses a serious threat to native animals.",
"title": "Distribution"
},
{
"paragraph_id": 31,
"text": "Following the apparent success of the cane toad in eating the beetles threatening the sugarcane plantations of Puerto Rico, and the fruitful introductions into Hawaiʻi and the Philippines, a strong push was made for the cane toad to be released in Australia to negate the pests ravaging the Queensland cane fields. As a result, 102 toads were collected from Hawaiʻi and brought to Australia. Queensland's sugar scientists released the toad into cane fields in August 1935. After this initial release, the Commonwealth Department of Health decided to ban future introductions until a study was conducted into the feeding habits of the toad. The study was completed in 1936 and the ban lifted, when large-scale releases were undertaken; by March 1937, 62,000 toadlets had been released into the wild. The toads became firmly established in Queensland, increasing exponentially in number and extending their range into the Northern Territory and New South Wales. In 2010, one was found on the far western coast in Broome, Western Australia.",
"title": "Distribution"
},
{
"paragraph_id": 32,
"text": "However, the toad was generally unsuccessful in reducing the targeted grey-backed cane beetles (Dermolepida albohirtum), in part because the cane fields provided insufficient shelter for the predators during the day, and in part because the beetles live at the tops of sugar cane—and cane toads are not good climbers. Since its original introduction, the cane toad has had a particularly marked effect on Australian biodiversity. The population of a number of native predatory reptiles has declined, such as the varanid lizards Varanus mertensi, V. mitchelli, and V. panoptes, the land snakes Pseudechis australis and Acanthophis antarcticus, and the crocodile species Crocodylus johnstoni; in contrast, the population of the agamid lizard Amphibolurus gilberti—known to be a prey item of V. panoptes—has increased. Meat ants, however, are able to kill cane toads. The cane toad has also been linked to decreases in northern quolls in the southern region of Kakadu National Park and even their local extinction.",
"title": "Distribution"
},
{
"paragraph_id": 33,
"text": "The cane toad was introduced to various Caribbean islands to counter a number of pests infesting local crops. While it was able to establish itself on some islands, such as Barbados, Jamaica, Hispaniola and Puerto Rico, other introductions, such as in Cuba before 1900 and in 1946, and on the islands of Dominica and Grand Cayman, were unsuccessful.",
"title": "Distribution"
},
{
"paragraph_id": 34,
"text": "The earliest recorded introductions were to Barbados and Martinique. The Barbados introductions were focused on the biological control of pests damaging the sugarcane crops, and while the toads became abundant, they have done even less to control the pests than in Australia. The toad was introduced to Martinique from French Guiana before 1944 and became established. Today, they reduce the mosquito and mole cricket populations. A third introduction to the region occurred in 1884, when toads appeared in Jamaica, reportedly imported from Barbados to help control the rodent population. While they had no significant effect on the rats, they nevertheless became well established. Other introductions include the release on Antigua—possibly before 1916, although this initial population may have died out by 1934 and been reintroduced at a later date—and Montserrat, which had an introduction before 1879 that led to the establishment of a solid population, which was apparently sufficient to survive the Soufrière Hills volcano eruption in 1995.",
"title": "Distribution"
},
{
"paragraph_id": 35,
"text": "In 1920, the cane toad was introduced into Puerto Rico to control the populations of white grub (Phyllophaga spp.), a sugarcane pest. Before this, the pests were manually collected by humans, so the introduction of the toad eliminated labor costs. A second group of toads was imported in 1923, and by 1932, the cane toad was well established. The population of white grubs dramatically decreased, and this was attributed to the cane toad at the annual meeting of the International Sugar Cane Technologists in Puerto Rico. However, there may have been other factors. The six-year period after 1931—when the cane toad was most prolific, and the white grub had a dramatic decline—had the highest-ever rainfall for Puerto Rico. Nevertheless, the cane toad was assumed to have controlled the white grub; this view was reinforced by a Nature article titled \"Toads save sugar crop\", and this led to large-scale introductions throughout many parts of the Pacific.",
"title": "Distribution"
},
{
"paragraph_id": 36,
"text": "The cane toad has been spotted in Carriacou and Dominica, the latter appearance occurring in spite of the failure of the earlier introductions. On September 8, 2013, the cane toad was also discovered on the island of New Providence in the Bahamas.",
"title": "Distribution"
},
{
"paragraph_id": 37,
"text": "The cane toad was first introduced deliberately into the Philippines in 1930 as a biological control agent of pests in sugarcane plantations, after the success of the experimental introductions into Puerto Rico. It subsequently became the most ubiquitous amphibian in the islands. It still retains the common name of bakî or kamprag in the Visayan languages, a corruption of 'American frog', referring to its origins. It is also commonly known as \"bullfrog\" in Philippine English.",
"title": "Distribution"
},
{
"paragraph_id": 38,
"text": "The cane toad was introduced into Fiji to combat insects that infested sugarcane plantations. The introduction of the cane toad to the region was first suggested in 1933, following the successes in Puerto Rico and Hawaiʻi. After considering the possible side effects, the national government of Fiji decided to release the toad in 1953, and 67 specimens were subsequently imported from Hawaiʻi. Once the toads were established, a 1963 study concluded, as the toad's diet included both harmful and beneficial invertebrates, it was considered \"economically neutral\". Today, the cane toad can be found on all major islands in Fiji, although they tend to be smaller than their counterparts in other regions.",
"title": "Distribution"
},
{
"paragraph_id": 39,
"text": "The cane toad was introduced into New Guinea to control the hawk moth larvae eating sweet potato crops. The first release occurred in 1937 using toads imported from Hawaiʻi, with a second release the same year using specimens from the Australian mainland. Evidence suggests a third release in 1938, consisting of toads being used for human pregnancy tests—many species of toad were found to be effective for this task, and were employed for about 20 years after the discovery was announced in 1948. Initial reports argued the toads were effective in reducing the levels of cutworms and sweet potato yields were thought to be improving. As a result, these first releases were followed by further distributions across much of the region, although their effectiveness on other crops, such as cabbages, has been questioned; when the toads were released at Wau, the cabbages provided insufficient shelter and the toads rapidly left the immediate area for the superior shelter offered by the forest. A similar situation had previously arisen in the Australian cane fields, but this experience was either unknown or ignored in New Guinea. The cane toad has since become abundant in rural and urban areas.",
"title": "Distribution"
},
{
"paragraph_id": 40,
"text": "The cane toad naturally exists in South Texas, but attempts (both deliberate and accidental) have been made to introduce the species to other parts of the country. These include introductions to Florida and to Hawaiʻi, as well as largely unsuccessful introductions to Louisiana.",
"title": "Distribution"
},
{
"paragraph_id": 41,
"text": "Initial releases into Florida failed. Attempted introductions before 1936 and 1944, intended to control sugarcane pests, were unsuccessful as the toads failed to proliferate. Later attempts failed in the same way. However, the toad gained a foothold in the state after an accidental release by an importer at Miami International Airport in 1957, and deliberate releases by animal dealers in 1963 and 1964 established the toad in other parts of Florida. Today, the cane toad is well established in the state, from the Keys to north of Tampa, and they are gradually extending further northward. In Florida, the toad is a regarded as a threat to native species and pets; so much so, the Florida Fish and Wildlife Conservation Commission recommends residents to kill them.",
"title": "Distribution"
},
{
"paragraph_id": 42,
"text": "Around 150 cane toads were introduced to Oʻahu in Hawaiʻi in 1932, and the population swelled to 105,517 after 17 months. The toads were sent to the other islands, and more than 100,000 toads were distributed by July 1934; eventually over 600,000 were transported.",
"title": "Distribution"
},
{
"paragraph_id": 43,
"text": "Other than the use as a biological control for pests, the cane toad has been employed in a number of commercial and noncommercial applications. Traditionally, within the toad's natural range in South America, the Embera-Wounaan would \"milk\" the toads for their toxin, which was then employed as an arrow poison. The toxins may have been used as an entheogen by the Olmec people. The toad has been hunted as a food source in parts of Peru, and eaten after the careful removal of the skin and parotoid glands. When properly prepared, the meat of the toad is considered healthy and as a source of omega-3 fatty acids. More recently, the toad's toxins have been used in a number of new ways: bufotenin has been used in Japan as an aphrodisiac and a hair restorer, and in cardiac surgery in China to lower the heart rates of patients. New research has suggested that the cane toad's poison may have some applications in treating prostate cancer.",
"title": "Uses"
},
{
"paragraph_id": 44,
"text": "Other modern applications of the cane toad include pregnancy testing, as pets, laboratory research, and the production of leather goods. Pregnancy testing was conducted in the mid-20th century by injecting urine from a woman into a male toad's lymph sacs, and if spermatozoa appeared in the toad's urine, the patient was deemed to be pregnant. The tests using toads were faster than those employing mammals; the toads were easier to raise, and, although the initial 1948 discovery employed Bufo arenarum for the tests, it soon became clear that a variety of anuran species were suitable, including the cane toad. As a result, toads were employed in this task for around 20 years. As a laboratory animal, the cane toad has numerous advantages: they are plentiful, and easy and inexpensive to maintain and handle. The use of the cane toad in experiments started in the 1950s, and by the end of the 1960s, large numbers were being collected and exported to high schools and universities. Since then, a number of Australian states have introduced or tightened importation regulations.",
"title": "Uses"
},
{
"paragraph_id": 45,
"text": "There are several commercial uses for dead cane toads. Cane toad skin is made into leather and novelty items. Stuffed cane toads, posed and accessorised, are merchandised at souvenir shops for tourists. Attempts have been made to produce fertiliser from toad carcasses.",
"title": "Uses"
}
] | The cane toad, also known as the giant neotropical toad or marine toad, is a large, terrestrial true toad native to South and mainland Central America, but which has been introduced to various islands throughout Oceania and the Caribbean, as well as Northern Australia. It is a member of the genus Rhinella, which includes many true toad species found throughout Central and South America, but it was formerly assigned to the genus Bufo. The cane toad is an old species. A fossil toad from the La Venta fauna of the late Miocene in Colombia is indistinguishable from modern cane toads from northern South America. It was discovered in a floodplain deposit, which suggests the R. marina habitat preferences have long been for open areas. The cane toad is a prolific breeder; females lay single-clump spawns with thousands of eggs. Its reproductive success is partly because of opportunistic feeding: it has a diet, unusual among anurans, of both dead and living matter. Adults average 10–15 cm (4–6 in) in length; the largest recorded specimen had a snout-vent length of 24 cm (9.4 in). The cane toad has poison glands, and the tadpoles are highly toxic to most animals if ingested. Its toxic skin can kill many animals, both wild and domesticated, and cane toads are particularly dangerous to dogs. Because of its voracious appetite, the cane toad has been introduced to many regions of the Pacific and the Caribbean islands as a method of agricultural pest control. The common name of the species is derived from its use against the cane beetle, which damages sugar cane. The cane toad is now considered a pest and an invasive species in many of its introduced regions. The 1988 film Cane Toads: An Unnatural History documented the trials and tribulations of the introduction of cane toads in Australia. | 2001-10-01T05:53:34Z | 2023-12-19T18:49:57Z | [
"Template:Featured article",
"Template:Harvnb",
"Template:Cite journal",
"Template:Commons category",
"Template:Pp-semi-indef",
"Template:Convert",
"Template:Reflist",
"Template:Cite report",
"Template:Wikispecies",
"Template:Spoken Wikipedia",
"Template:Portal bar",
"Template:Authority control",
"Template:Pp-move",
"Template:Cite web",
"Template:Cite conference",
"Template:In lang",
"Template:Refend",
"Template:Taxonbar",
"Template:Short description",
"Template:Speciesbox",
"Template:Sfn",
"Template:Main",
"Template:Cite news",
"Template:Cite book",
"Template:Refbegin",
"Template:Other uses"
] | https://en.wikipedia.org/wiki/Cane_toad |
6,643 | Croquet | Croquet (UK: /ˈkroʊkeɪ, -ki/ or US: /kroʊˈkeɪ/) is a sport that involves hitting wooden or plastic balls with a mallet through hoops (often called "wickets" in the United States) embedded in a grass playing court.
In all forms of croquet, individual players or teams take turns to strike the balls, scoring points by striking them through a hoop. The game ends when a player or team reaches a predetermined number of points. Several variations exist which differ in when and how a stroke may be legally played, when points are scored, the layout of the lawn, and the target score. Commonly, social games adopt further non-standard variations to adapt play to the conditions. In all versions, players of all ages and genders compete on equal terms and are ranked together.
Two versions of the game are directly governed by the World Croquet Federation, who organise individual and team World Championships. Other regional variants that developed in parallel remain common in parts of the world.
Association croquet is played between two individuals or teams, each playing with two balls. The object of the game is to be the first to pass each of their balls through all six hoops in both directions - in a fixed order - and to strike the central peg. Each of these actions scores a point, with the maximum score being 26 points.
The first four turns must be taken to play the balls onto the lawn from one of two "baulk lines" found one yard into the lawn on the western half of the south boundary and the eastern half of the north boundary. After this, players elect which of their two balls to play for the duration of each turn.
On a turn, a player may earn extra shots in two ways. The player earns a single extra shot by scoring a hoop point, or two extra shots by causing their ball to contact another ball - an action called a "roquet". When a roquet is made, the player may pick up their ball and place it in contact with the roqueted ball. The next shot must move both the player's ball and the roqueted ball, and is the "croquet" stroke that gives the game its name. After a successful croquet stroke, the player has a single further shot, known as the "continuation". During a turn, each of the other three balls may only be croqueted once between hoop points, but by stringing together a series of roquets, croquets, and scored hoops, several points may be scored in a single turn.
Advanced variants of association croquet give further penalties to dissuade skilled players from running every hoop with a ball on a single break, while handicap versions give weaker players chances to continue play after making an error. These extra turns, called "bisques", are effective in levelling the odds of winning.
Golf croquet is played between two individual players or teams, each playing with two balls. The object of the game is to reach a certain number of points, typically seven, earned by being the first to run a hoop.
The game opens by playing each ball into the lawn from the fourth (south-eastern) corner of the lawn. Balls must be played in order (for the primary ball colours, this is blue, red, black, and yellow), and this order of play is maintained throughout the game.
Hoops are contested in a fixed order, with a point awarded to the owner of the first ball to pass through the hoop in the correct direction. After a point is awarded, all players move on to contest the next hoop. Balls that are played more than halfway to the next hoop before a point is scored are considered offside, and are moved to penalty areas.
Golf croquet is the fastest-growing version of the game, owing largely to its simplicity and competitiveness. There is an especially large interest in competitive success by players in Egypt. By comparison with association croquet, golf croquet requires a smaller variety of shots, and emphasises strategic skills and accurate shot making. Play is faster and balls are more likely to be hit harder or lifted off the ground.
The American-rules version of croquet is the dominant version of the game in the United States and is also widely played in Canada. It is governed by the United States Croquet Association. Its genesis is mostly in association croquet, but it differs in a number of important ways that reflect the home-grown traditions of American "backyard" croquet. Official rules were first published in 1894 by the Spalding Athletic Library as adopted by the National American Croquet Association.
American six-wicket uses the same six-wicket layout as both association croquet and golf croquet, and is also played by two individuals or teams, each owning two balls.
Like association croquet, the object of the game is to be the first to pass each of their balls through all six hoops in both directions and to strike the central peg, for a total of 26 points.
Unlike association croquet, balls are always played in the same sequence (blue, red, black, yellow). The limitation of roqueting each ball once between hoop points is, unlike in association croquet, carried over from turn to turn until the ball scores the next hoop. In American six-wicket, this is termed "deadness", and a separate board is required to keep track of the deadness for all four balls.
A further difference is the more restrictive boundary-line rules of American croquet. In the American game, roqueting a ball out of bounds or running a hoop so that the ball goes out of bounds causes the turn to end, and balls that go out of bounds are replaced only nine inches (23 cm) from the boundary rather than one yard (91 cm) as in association croquet. "Attacking" balls on the boundary line to bring them into play is thus far more challenging.
Nine-wicket croquet, sometimes called "backyard croquet", is played mainly in Canada and the United States, and is the game most recreational players in those countries call simply "croquet". In this version of croquet, there are nine wickets, two stakes, and up to six balls. The course is arranged in a double-diamond pattern, with one stake at each end of the course. Players start at one stake, navigate one side of the double diamond, hit the turning stake, then navigate the opposite side of the double diamond and hit the starting stake to end. If playing individually (Cutthroat), the first player to stake out is the winner. In partnership play, all members of a team must stake out, and a player might choose to avoid staking out (becoming a Rover) in order to help a lagging teammate.
Each time a ball is roqueted, the striker gets two bonus shots. For the first bonus shot, the player has four options:
The second bonus shot ("continuation shot") is an ordinary shot played from where the striker ball came to rest.
An alternative endgame is "poison": in this variant, a player who has scored the last wicket but not hit the starting stake becomes a "poison ball", which may eliminate other balls from the game by roqueting them. A non-poison ball that roquets a poison ball has the normal options. A poison ball that hits a stake or passes through any wicket (possibly by the action of a non-poison player) is eliminated. The last person remaining is the winner.
As well as club-level games, county-level tournaments and leagues, there are regular world championships and international matches between croquet-playing countries. The sport has particularly strong followings in the UK, US, New Zealand, Australia and Egypt; many other countries also play. Every four years, the top countries play in the World Team Championships in AC (the MacRobertson Shield) and GC (the Openshaw Shield). The current world rankings show England in top place for AC, followed by Australia in second place, and New Zealand in third place, with the United States in fourth position. The same four countries appear in the top six of the GC country rankings, below Egypt in top position, and with Spain at number six.
Individual World Championships usually take place every two or three years. The 2023 AC World Championships took place in London; the winner was Robert Fulford. The current Women's Association Croquet World Champion (2023) is Debbie Lines of England.
The most prestigious international team competition in association croquet is the MacRobertson International Croquet Shield. It is contested every three to four years between Australia, England (formerly Great Britain), the United States and New Zealand. Other nations compete in Tier 2 and Tier 3 World Team Championships. Teams are promoted and relegated between the lower tiers, but there is no relegation to or promotion from the MacRobertson Shield. The current holders of the MacRobertson Shield are England, who won the title in 2023.
At the Golf Croquet World Team Championships, eight nations contest the Openshaw Shield. There is promotion and relegation between Tier 1, Tier 2, and Tier 3. The current holders of the Openshaw Shield are New Zealand, who won in 2020.
The world's top 10 association croquet players as of October 2023 were Robert Fletcher (Australia), Robert Fulford (England), Paddy Chapman (New Zealand), Jamie Burch (England), Reg Bamford (South Africa), Matthew Essick (USA), Mark Avery (England), Simon Hockey (Australia), Harry Fisher (England), Jose Riva (Spain).
In April 2013, Reg Bamford of South Africa beat Ahmed Nasr of Egypt in the final of the Golf Croquet World Championship in Cairo, becoming the first person to simultaneously hold the title in both association croquet and golf croquet. As of 2023, the Golf Croquet World Champion was Matthew Essick (USA) and the Women's Golf Croquet World Champion was Jamie Gumbrell (Australia).
In 2018, two international championships open to both sexes were won by women: in May, Rachel Gee of England beat Pierre Beaudry of Belgium to win the European Golf Croquet championship, and in October, Hanan Rashad of Egypt beat Yasser Fathy (also from Egypt) to win the World over-50s Golf Croquet championship.
Croquet was an event at the 1900 Summer Olympics. Roque, an American variation on croquet, was an event at the 1904 Summer Olympics.
The oldest document to bear the word croquet with a description of the modern game is the set of rules registered by Isaac Spratt in November 1856 with the Stationers' Company of London. This record is now in the Public Record Office. In 1868, the first croquet all-comers meet was held at Moreton-in-Marsh, Gloucestershire and in the same year the All England Croquet Club was formed at Wimbledon, London.
Regardless of when and by what route it reached the British Isles and the British colonies in its recognizable form, croquet is, like pall-mall and trucco, among the later forms of ground billiards, which as a class have been popular in Western Europe since at least the Late Middle Ages, with roots in classical antiquity, including sometimes the use of arches and pegs along with balls and mallets or other striking sticks (some more akin to modern field hockey sticks). By the 12th century, a team ball game called la soule or choule, akin to a chaotic version of hockey or football (depending on whether sticks were used), was regularly played in France and southern Britain between villages or parishes; it was attested in Cornwall as early as 1283.
In the book Queen of Games: The History of Croquet, Nicky Smith presents two theories of the origin of the modern game of croquet, which took England by storm in the 1860s and then spread overseas.
The first explanation is that the ancestral game was introduced to Britain from France during the 1660–1685 reign of Charles II of England, Scotland and Ireland, and was played under the name of paille-maille (among other spellings, today usually pall-mall), derived ultimately from Latin words for 'ball and mallet' (the latter also found in the name of the earlier French game, jeu de mail). This was the explanation given in the ninth edition of Encyclopædia Britannica, dated 1877.
In his 1801 book The Sports and Pastimes of the People of England, Joseph Strutt described the way pall-mall was played in England at the time:
"Pale-maille is a game wherein a round box[wood] ball is struck with a mallet through a high arch of iron, which he that can do at the fewest blows, or at the number agreed upon, wins. It is to be observed, that there are two of these arches, that is one at either end of the alley. The game of mall was a fashionable amusement in the reign of Charles the Second, and the walk in Saint James's Park, now called the Mall, received its name from having been appropriated to the purpose of playing at mall, where Charles himself and his courtiers frequently exercised themselves in the practice of this pastime."
While the name pall-mall and various games bearing this name also appeared elsewhere (France and Italy), the description above suggests that the croquet-like games in particular were popular in England by the early 17th century. Some other early modern sources refer to pall-mall being played over a large distance (as in golf); however, an image in Strutt's 1801 book shows a croquet-like ground billiards game (balls on ground, hoop, bats, and peg) being played over a short, garden-sized distance. The image's caption describes the game as "a curious ancient pastime", confirming that croquet games were not new in early-19th-century England.
In Samuel Johnson's 1755 dictionary, his definition of "pall-mall" clearly describes a game with similarities to modern croquet: "A play in which the ball is struck with a mallet through an iron ring". However, there is no evidence that pall-mall involved the croquet stroke which is the distinguishing characteristic of the modern game.
The second theory is that the rules of the modern game of croquet arrived from Ireland during the 1850s, perhaps after being brought there from Brittany, where a similar game was played on the beaches. Regular contact between Ireland and France had continued since the Norman invasion of Ireland in 1169. By no later than the early 15th century, the game jeu de mail (itself ancestral to pall-mall and perhaps to indoor billiards) was popular in France, including in the courts of Henry II in the 16th century and Louis XIV of the 17th.
At least one version of it, rouët ('wheel'), was a multi-ball lawn game. Records show a game called "crookey", similar to croquet, being played at Castlebellingham in County Louth, Ireland, in 1834, which was introduced to Galway in 1835 and played on the bishop's palace garden, and in the same year to the genteel Dublin suburb of Kingstown (today Dún Laoghaire) where it was first spelt as "croquet". There is, however, no pre-1858 Irish document that describes the way the game was played; in particular, there is no reference to the distinctive croquet stroke, which is described above under "Variations: Association". The noted croquet historian Dr Prior, in his book of 1872, makes the categoric statement "One thing only is certain: it is from Ireland that croquet came to England and it was on the lawn of the late Lord Lonsdale that it was first played in this country." This was about 1851.
John Jaques apparently claimed in a letter to Arthur Lillie in 1873 that he had himself seen the game played in Ireland, writing "I made the implements and published directions (such as they were) before Mr. Spratt [mentioned above] introduced the subject to me." Whatever the truth of the matter, Jaques certainly played an important role in popularising the game, producing editions of the rules in 1857, 1860, and 1864.
Croquet became highly popular as a social pastime in England during the 1860s. It was enthusiastically adopted and promoted by the Earl of Essex who held lavish croquet parties at Cassiobury House, his stately home in Watford, Hertfordshire, and the Earl even launched his own Cassiobury brand croquet set. By 1867, Jaques had printed 65,000 copies of his Laws and Regulations of the game. It quickly spread to other Anglophone countries, including Australia, Canada, New Zealand, South Africa, and the United States. No doubt one of the attractions was that the game could be played by both sexes; this also ensured a certain amount of adverse comment.
It is no coincidence that the game became popular at the same time as the cylinder lawn mower, since croquet can only be played well on a lawn that is flat and finely-cut.
By the late 1870s, however, croquet had been eclipsed by another fashionable game, lawn tennis, and many of the newly created croquet clubs, including the All England Club at Wimbledon, converted some or all of their lawns into tennis courts.
There was a revival in the 1890s, but from then onwards, croquet was always a minority sport, with national individual participation amounting to a few thousand players. The All England Lawn Tennis and Croquet Club still has a croquet lawn, but has not hosted any significant tournaments. Its championship was won 38 times by Bernard Neal. The English headquarters for the game is now in Cheltenham.
The earliest known reference to croquet in Scotland is the booklet The Game of Croquet, its Laws and Regulations which was published in the mid-1860s for the proprietor of Eglinton Castle, the Earl of Eglinton. On the page facing the title page is a picture of Eglinton Castle with a game of "croquet" in full swing.
The croquet lawn existed on the northern terrace, between Eglinton Castle and the Lugton Water. The 13th Earl developed a variation on croquet named Captain Moreton's Eglinton Castle croquet, which had small bells on the eight hoops "to ring the changes", two pegs, a double hoop with a bell and two tunnels for the ball to pass through. In 1865 the 'Rules of the Eglinton Castle and Cassiobury Croquet' was published by Edmund Routledge. Several incomplete sets of this form of croquet are known to exist, and one complete set is still used for demonstration games in the West of Scotland.
Croquet is popularly believed to be viciously competitive. This may derive from the fact that (unlike in golf) players will often attempt to move their opponents' balls to unfavourable positions. However, purely negative play is rarely a winning strategy; successful players (in all versions other than golf croquet) will use all four balls to set up a break for themselves, rather than simply making the game as difficult as possible for their opponents.
The way croquet is depicted in paintings and books says much about popular perceptions of the game, though little about the reality of modern play.
About 200 croquet clubs across the United States are members of the United States Croquet Association.
Many colleges have croquet clubs as well, such as The University of Virginia, The University of Chicago, Pennsylvania State University, Bates College, SUNY New Paltz, Harvard University, and Dartmouth College. Notably, St. John's College and the US Naval Academy engage in a yearly match in Annapolis, Maryland. Both schools also compete at the collegiate level and the rivalry continues to be an Annapolis tradition, attracting thousands of spectators each April.
In England and Wales, there are around 170 clubs affiliated with the Croquet Association. The All England Lawn Tennis and Croquet Club at Wimbledon is famous for its lawn tennis tournament, but retains an active croquet section. There are also clubs in many universities and colleges, with an annual Varsity match being played between Oxford and Cambridge. With over 1800 participants, the 2011 Oxford University "Cuppers" (inter-college) tournament claimed to be not only the largest croquet tournament ever, but the largest sporting event in the university's history.
There are 112 clubs in New Zealand, affiliated with 19 associations. These are governed by Croquet New Zealand. | [
{
"paragraph_id": 0,
"text": "Croquet (UK: /ˈkroʊkeɪ, -ki/ or US: /kroʊˈkeɪ/) is a sport that involves hitting wooden or plastic balls with a mallet through hoops (often called \"wickets\" in the United States) embedded in a grass playing court.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In all forms of croquet, individual players or teams take turns to strike the balls, scoring points by striking them through a hoop. The game ends when a player or team reaches a predetermined number of points. Several variations exist which differ in when and how a stroke may be legally played, when points are scored, the layout of the lawn, and the target score. Commonly, social games adopt further non-standard variations to adapt play to the conditions. In all versions, players of all ages and genders compete on equal terms and are ranked together.",
"title": "Variations"
},
{
"paragraph_id": 2,
"text": "Two versions of the game are directly governed by the World Croquet Federation, who organise individual and team World Championships. Other regional variants that developed in parallel remain common in parts of the world.",
"title": "Variations"
},
{
"paragraph_id": 3,
"text": "Association croquet is played between two individuals or teams, each playing with two balls. The object of the game is to be the first to pass each of their balls through all six hoops in both directions - in a fixed order - and to strike the central peg. Each of these actions scores a point, with the maximum score being 26 points.",
"title": "Variations"
},
{
"paragraph_id": 4,
"text": "The first four turns must be taken to play the balls onto the lawn from one of two \"baulk lines\" found one yard into the lawn on the western half of the south boundary and the eastern half of the nor boundary. After this, players elect which of their two balls to play for the duration of each turn.",
"title": "Variations"
},
{
"paragraph_id": 5,
"text": "On a turn, a player may earn extra shots in two ways. The player earns a single extra shot by scoring a hoop point, or two extra shots by causing their ball to contact another ball - an action called a \"roquet\". When a roquet is made, the player may pick up their ball and place it in contact with the roqueted ball. The next shot must move both the player's ball and the roqueted ball, and is the \"croquet\" stroke that gives the game its name. After a successful croquet stroke, the player has a single further shot, known as the \"continuation\". During a turn, each of the other three balls may only be croqueted once between hoop points, but by stringing together a series of roquets, croquets, and scored hoops, several points may be scored in a single turn.",
"title": "Variations"
},
{
"paragraph_id": 6,
"text": "Advanced variants of association croquet give further penalties to dissuade skilled players from running every hoop with a ball on a single break, while handicap versions give weaker players chances to continue play after making an error. These extra turns, called \"bisques\" are effective in levelling the odds of winning.",
"title": "Variations"
},
{
"paragraph_id": 7,
"text": "Golf croquet is played between two individual players or teams, each playing with two balls. The object of the game is to reach a certain number of points, typically seven, earned by being the first to run a hoop.",
"title": "Variations"
},
{
"paragraph_id": 8,
"text": "The game opens by playing each ball into the lawn from the fourth (south-eastern) corner of the lawn. Balls must be played in order (for the primary ball colours, this is blue, red, black, and yellow), and this order of play is maintained throughout the game.",
"title": "Variations"
},
{
"paragraph_id": 9,
"text": "Hoops are contested in a fixed order, with a point awarded to the owner of the first ball to pass through the hoop in the correct direction. After a point is awarded, all players move on to contest the next hoop. Balls that are played more than halfway to the next hoop before a point is scored are considered offside, and are moved to penalty areas.",
"title": "Variations"
},
{
"paragraph_id": 10,
"text": "Golf croquet is the fastest-growing version of the game, owing largely to its simplicity and competitiveness. There is an especially large interest in competitive success by players in Egypt. By comparison with association croquet, golf croquet requires a smaller variety of shots, and emphasises strategic skills and accurate shot making. Play is faster and balls are more likely to be hit harder or lifted off the ground.",
"title": "Variations"
},
{
"paragraph_id": 11,
"text": "The American-rules version of croquet is the dominant version of the game in the United States and is also widely played in Canada. It is governed by the United States Croquet Association. Its genesis is mostly in association croquet, but it differs in a number of important ways that reflect the home-grown traditions of American \"backyard\" croquet. Official rules we're first published in 1894 by the Spalding Athletic Library as adopted by the National American Croquet Association.",
"title": "Variations"
},
{
"paragraph_id": 12,
"text": "American six-wicket uses the same six-wicket layout as both association croquet and golf croquet, and is also played by two individuals or teams, each owning two balls.",
"title": "Variations"
},
{
"paragraph_id": 13,
"text": "Like association croquet, the object of the game is to be the first to pass each of their balls through all six hoops in both directions and to strike the central peg, for a total of 26 points.",
"title": "Variations"
},
{
"paragraph_id": 14,
"text": "Unlike association croquet, balls are always played in the same sequence (blue, red, black, yellow). The limitation of roqueting each ball once between hoop points is, unlike in association croquet, carried over from turn to turn until the ball scores the next hoop. In American six-wicket, this is termed \"deadness\", and a separate board is required to keep track of the deadness for all four balls.",
"title": "Variations"
},
{
"paragraph_id": 15,
"text": "A further difference is the more restrictive boundary-line rules of American croquet. In the American game, roqueting a ball out of bounds or running a hoop so that the ball goes out of bounds causes the turn to end, and balls that go out of bounds are replaced only nine inches (23 cm) from the boundary rather than one yard (91 cm) as in association croquet. \"Attacking\" balls on the boundary line to bring them into play is thus far more challenging.",
"title": "Variations"
},
{
"paragraph_id": 16,
"text": "Nine-wicket croquet, sometimes called \"backyard croquet\", is played mainly in Canada and the United States, and is the game most recreational players in those countries call simply \"croquet\". In this version of croquet, there are nine wickets, two stakes, and up to six balls. The course is arranged in a double-diamond pattern, with one stake at each end of the course. Players start at one stake, navigate one side of the double diamond, hit the turning stake, then navigate the opposite side of the double diamond and hit the starting stake to end. If playing individually (Cutthroat), the first player to stake out is the winner. In partnership play, all members of a team must stake out, and a player might choose to avoid staking out (becoming a Rover) in order to help a lagging teammate.",
"title": "Variations"
},
{
"paragraph_id": 17,
"text": "Each time a ball is roqueted, the striker gets two bonus shots. For the first bonus shot, the player has four options:",
"title": "Variations"
},
{
"paragraph_id": 18,
"text": "The second bonus shot (\"continuation shot\") is an ordinary shot played from where the striker ball came to rest.",
"title": "Variations"
},
{
"paragraph_id": 19,
"text": "An alternative endgame is \"poison\": in this variant, a player who has scored the last wicket but not hit the starting stake becomes a \"poison ball\", which may eliminate other balls from the game by roqueting them. A non-poison ball that roquets a poison ball has the normal options. A poison ball that hits a stake or passes through any wicket (possibly by the action of a non-poison player) is eliminated. The last person remaining is the winner.",
"title": "Variations"
},
{
"paragraph_id": 20,
"text": "As well as club-level games, county-level tournaments and leagues, there are regular world championships and international matches between croquet-playing countries. The sport has particularly strong followings in the UK, US, New Zealand, Australia and Egypt; many other countries also play. Every four years, the top countries play in the World Team Championships in AC (the MacRobertson Shield) and GC (the Openshaw Shield). The current world rankings show England in top place for AC, followed by Australia in second place, and New Zealand in third place, with the United States in fourth position. The same four countries appear in the top six of the GC country rankings, below Egypt in top position, and with Spain at number six.",
"title": "International Croquet"
},
{
"paragraph_id": 21,
"text": "Individual World Championships usually take place every two or three years. The 2023 AC World Championships took place in London; the winner was Robert Fulford. The current Women's Association Croquet World Champion (2023) is Debbie Lines of England.",
"title": "International Croquet"
},
{
"paragraph_id": 22,
"text": "The most prestigious international team competition in association croquet is the MacRobertson International Croquet Shield. It is contested every three to four years between Australia, England (formerly Great Britain), the United States and New Zealand. Other nations compete in Tier 2 and Tier 3 World Team Championships. Teams are promoted and relegated between the lower tiers, but there is no relegation to or promotion from the MacRobertson Shield. The current holders of the MacRobertson Shield are England, who won the title in 2023.",
"title": "International Croquet"
},
{
"paragraph_id": 23,
"text": "At the Golf Croquet World Team Championships, eight nations contest the Openshaw Shield. There is promotion and relegation between Tier 1, Tier 2, and Tier 3. The current holders of the Openshaw Shield are New Zealand, who won in 2020.",
"title": "International Croquet"
},
{
"paragraph_id": 24,
"text": "The world's top 10 association croquet players as of October 2023 were Robert Fletcher (Australia), Robert Fulford (England), Paddy Chapman (New Zealand), Jamie Burch (England), Reg Bamford (South Africa), Matthew Essick (USA), Mark Avery (England), Simon Hockey (Australia), Harry Fisher (England), Jose Riva (Spain).",
"title": "International Croquet"
},
{
"paragraph_id": 25,
"text": "In April 2013, Reg Bamford of South Africa beat Ahmed Nasr of Egypt in the final of the Golf Croquet World Championship in Cairo, becoming the first person to simultaneously hold the title in both association croquet and golf croquet. As of 2023, the Golf Croquet World Champion was Matthew Essick (USA) and the Women's Golf Croquet World Champion was Jamie Gumbrell (Australia).",
"title": "International Croquet"
},
{
"paragraph_id": 26,
"text": "In 2018, two international championships open to both sexes were won by women: in May, Rachel Gee of England beat Pierre Beaudry of Belgium to win the European Golf Croquet championship, and in October, Hanan Rashad of Egypt beat Yasser Fathy (also from Egypt) to win the World over-50s Golf Croquet championship.",
"title": "International Croquet"
},
{
"paragraph_id": 27,
"text": "Croquet was an event at the 1900 Summer Olympics. Roque, an American variation on croquet, was an event at the 1904 Summer Olympics.",
"title": "International Croquet"
},
{
"paragraph_id": 28,
"text": "The oldest document to bear the word croquet with a description of the modern game is the set of rules registered by Isaac Spratt in November 1856 with the Stationers' Company of London. This record is now in the Public Record Office. In 1868, the first croquet all-comers meet was held at Moreton-in-Marsh, Gloucestershire and in the same year the All England Croquet Club was formed at Wimbledon, London.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Regardless when and by what route it reached the British Isles and the British colonies in its recognizable form, croquet is, like pall-mall and trucco, among the later forms of ground billiards, which as a class have been popular in Western Europe back to at least the Late Middle Ages, with roots in classical antiquity, including sometimes the use of arches and pegs along with balls and mallets or other striking sticks (some more akin to modern field hockey sticks). By the 12th century, a team ball game called la soule or choule, akin to a chaotic version of hockey or football (depending on whether sticks were used), was regularly played in France and southern Britain between villages or parishes; it was attested in Cornwall as early as 1283.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In the book Queen of Games: The History of Croquet, Nicky Smith presents two theories of the origin of the modern game of croquet, which took England by storm in the 1860s and then spread overseas.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The first explanation is that the ancestral game was introduced to Britain from France during the 1660–1685 reign of Charles II of England, Scotland and Ireland, and was played under the name of paille-maille (among other spellings, today usually pall-mall), derived ultimately from Latin words for 'ball and mallet' (the latter also found in the name of the earlier French game, jeu de mail). This was the explanation given in the ninth edition of Encyclopædia Britannica, dated 1877.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In his 1801 book The Sports and Pastimes of the People of England, Joseph Strutt described the way pall-mall was played in England at the time:",
"title": "History"
},
{
"paragraph_id": 33,
"text": "\"Pale-maille is a game wherein a round box[wood] ball is struck with a mallet through a high arch of iron, which he that can do at the fewest blows, or at the number agreed upon, wins. It is to be observed, that there are two of these arches, that is one at either end of the alley. The game of mall was a fashionable amusement in the reign of Charles the Second, and the walk in Saint James's Park, now called the Mall, received its name from having been appropriated to the purpose of playing at mall, where Charles himself and his courtiers frequently exercised themselves in the practice of this pastime.\"",
"title": "History"
},
{
"paragraph_id": 34,
"text": "While the name pall-mall and various games bearing this name also appeared elsewhere (France and Italy), the description above suggests that the croquet-like games in particular were popular in England by the early 17th century. Some other early modern sources refer to pall-mall being played over a large distance (as in golf); however, an image in Strutt's 1801 book shows a croquet-like ground billiards game (balls on ground, hoop, bats, and peg) being played over a short, garden-sized distance. The image's caption describes the game as \"a curious ancient pastime\", confirming that croquet games were not new in early-19th-century England.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "In Samuel Johnson's 1755 dictionary, his definition of \"pall-mall\" clearly describes a game with similarities to modern croquet: \"A play in which the ball is struck with a mallet through an iron ring\". However, there is no evidence that pall-mall involved the croquet stroke which is the distinguishing characteristic of the modern game.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The second theory is that the rules of the modern game of croquet arrived from Ireland during the 1850s, perhaps after being brought there from Brittany, where a similar game was played on the beaches. Regular contact between Ireland and France had continued since the Norman invasion of Ireland in 1169. By no later than the early 15th century, the game jeu de mail (itself ancestral to pall-mall and perhaps to indoor billiards) was popular in France, including in the courts of Henry II in the 16th century and Louis XIV of the 17th.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "At least one version of it, rouët ('wheel') was a multi-ball lawn game. Records show a game called \"crookey\", similar to croquet, being played at Castlebellingham in County Louth, Ireland, in 1834, which was introduced to Galway in 1835 and played on the bishop's palace garden, and in the same year to the genteel Dublin suburb of Kingstown (today Dún Laoghaire) where it was first spelt as \"croquet\". There is, however, no pre-1858 Irish document that describes the way game was played, in particular, there is no reference to the distinctive croquet stroke, which is described above under \"Variations: Association\". The noted croquet historian Dr Prior, in his book of 1872, makes the categoric statement \"One thing only is certain: it is from Ireland that croquet came to England and it was on the lawn of the late Lord Lonsdale that it was first played in this country.\" This was about 1851.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "John Jaques apparently claimed in a letter to Arthur Lillie in 1873 that he had himself seen the game played in Ireland, writing \"I made the implements and published directions (such as they were) before Mr. Spratt [mentioned above] introduced the subject to me.\" Whatever the truth of the matter, Jaques certainly played an important role in popularising the game, producing editions of the rules in 1857, 1860, and 1864.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Croquet became highly popular as a social pastime in England during the 1860s. It was enthusiastically adopted and promoted by the Earl of Essex who held lavish croquet parties at Cassiobury House, his stately home in Watford, Hertfordshire, and the Earl even launched his own Cassiobury brand croquet set. By 1867, Jaques had printed 65,000 copies of his Laws and Regulations of the game. It quickly spread to other Anglophone countries, including Australia, Canada, New Zealand, South Africa, and the United States. No doubt one of the attractions was that the game could be played by both sexes; this also ensured a certain amount of adverse comment.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "It is no coincidence that the game became popular at the same time as the cylinder lawn mower, since croquet can only be played well on a lawn that is flat and finely-cut.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "By the late 1870s, however, croquet had been eclipsed by another fashionable game, lawn tennis, and many of the newly created croquet clubs, including the All England Club at Wimbledon, converted some or all of their lawns into tennis courts.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "There was a revival in the 1890s, but from then onwards, croquet was always a minority sport, with national individual participation amounting to a few thousand players. The All England Lawn Tennis and Croquet Club still has a croquet lawn, but has not hosted any significant tournaments. Its championship was won 38 times by Bernard Neal. The English headquarters for the game is now in Cheltenham.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "The earliest known reference to croquet in Scotland is the booklet The Game of Croquet, its Laws and Regulations which was published in the mid-1860s for the proprietor of Eglinton Castle, the Earl of Eglinton. On the page facing the title page is a picture of Eglinton Castle with a game of \"croquet\" in full swing.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "The croquet lawn existed on the northern terrace, between Eglinton Castle and the Lugton Water. The 13th Earl developed a variation on croquet named Captain Moreton's Eglinton Castle croquet, which had small bells on the eight hoops \"to ring the changes\", two pegs, a double hoop with a bell and two tunnels for the ball to pass through. In 1865 the 'Rules of the Eglinton Castle and Cassiobury Croquet' was published by Edmund Routledge. Several incomplete sets of this form of croquet are known to exist, and one complete set is still used for demonstration games in the West of Scotland.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "Croquet is popularly believed to be viciously competitive. This may derive from the fact that (unlike in golf) players will often attempt to move their opponents' balls to unfavourable positions. However, purely negative play is rarely a winning strategy; successful players (in all versions other than golf croquet) will use all four balls to set up a break for themselves, rather than simply making the game as difficult as possible for their opponents.",
"title": "In art and literature"
},
{
"paragraph_id": 46,
"text": "The way croquet is depicted in paintings and books says much about popular perceptions of the game, though little about the reality of modern play.",
"title": "In art and literature"
},
{
"paragraph_id": 47,
"text": "About 200 croquet clubs across the United States are members of the United States Croquet Association.",
"title": "Clubs"
},
{
"paragraph_id": 48,
"text": "Many colleges have croquet clubs as well, such as The University of Virginia, The University of Chicago, Pennsylvania State University, Bates College, SUNY New Paltz, Harvard University, and Dartmouth College. Notably, St. John's College and the US Naval Academy engage in a yearly match in Annapolis, Maryland. Both schools also compete at the collegiate level and the rivalry continues to be an Annapolis tradition, attracting thousands of spectators each April.",
"title": "Clubs"
},
{
"paragraph_id": 49,
"text": "In England and Wales, there are around 170 clubs affiliated with the Croquet Association. The All England Lawn Tennis and Croquet Club at Wimbledon is famous for its lawn tennis tournament, but retains an active croquet section. There are also clubs in many universities and colleges, with an annual Varsity match being played between Oxford and Cambridge. With over 1800 participants, the 2011 Oxford University \"Cuppers\" (inter-college) tournament claimed to be not only the largest croquet tournament ever, but the largest sporting event in the university's history.",
"title": "Clubs"
},
{
"paragraph_id": 50,
"text": "There are 112 clubs in New Zealand, affiliated with 19 associations. These are governed by Croquet New Zealand.",
"title": "Clubs"
}
] | Croquet is a sport that involves hitting wooden or plastic balls with a mallet through hoops embedded in a grass playing court. | 2001-10-01T09:46:21Z | 2023-12-29T17:07:37Z | [
"Template:Convert",
"Template:Cite magazine",
"Template:Reflist",
"Template:Nowrap",
"Template:Page needed",
"Template:Div col",
"Template:Citation",
"Template:Authority control",
"Template:About",
"Template:Use dmy dates",
"Template:Infobox sport",
"Template:Lang",
"Template:More citations needed section",
"Template:Cite web",
"Template:Commons",
"Template:Short description",
"Template:IPAc-en",
"Template:Em",
"Template:Shamos 1999",
"Template:Cite book",
"Template:Stein & Rubino 2008",
"Template:Distinguish",
"Template:Portal",
"Template:Cite news",
"Template:Cuegloss",
"Template:Tone",
"Template:Citation needed",
"Template:Div col end",
"Template:Cite journal"
] | https://en.wikipedia.org/wiki/Croquet |
6,644 | Curling | Curling is a sport in which players slide stones on a sheet of ice toward a target area which is segmented into four concentric circles. It is related to bowls, boules and shuffleboard. Two teams, each with four players, take turns sliding heavy, polished granite stones, also called rocks, across the ice curling sheet toward the house, a circular target marked on the ice. Each team has eight stones, with each player throwing two. The purpose is to accumulate the highest score for a game; points are scored for the stones resting closest to the centre of the house at the conclusion of each end, which is completed when both teams have thrown all of their stones once. A game usually consists of eight or ten ends.
The player can induce a curved path, described as curl, by causing the stone to slowly rotate as it slides. The path of the rock may be further influenced by two sweepers with brooms or brushes, who accompany it as it slides down the sheet and sweep the ice in front of the stone. "Sweeping a rock" decreases the friction, which makes the stone travel a straighter path (with less curl) and a longer distance. A great deal of strategy and teamwork go into choosing the ideal path and placement of a stone for each situation, and the skills of the curlers determine the degree to which the stone will achieve the desired result.
Evidence that curling existed in Scotland in the early 16th century includes a curling stone inscribed with the date 1511 found (along with another bearing the date 1551) when an old pond was drained at Dunblane, Scotland. The world's oldest curling stone and the world's oldest football are now kept in the same museum (the Stirling Smith Art Gallery and Museum) in Stirling. The first written reference to a contest using stones on ice comes from the records of Paisley Abbey, Renfrewshire, in February 1541. Two paintings, "Winter Landscape with a Bird Trap" and "The Hunters in the Snow" (both dated 1565) by Pieter Bruegel the Elder, depict Flemish peasants curling, albeit without brooms; Scotland and the Low Countries had strong trading and cultural links during this period, which is also evident in the history of golf.
The word curling first appears in print in 1620 in Perth, Scotland, in the preface and the verses of a poem by Henry Adamson. The sport was (and still is, in Scotland and Scottish-settled regions like southern New Zealand) also known as "the roaring game" because of the sound the stones make while traveling over the pebble (droplets of water applied to the playing surface). The verbal noun curling is formed from the Scots (and English) verb curl, which describes the motion of the stone.
Kilsyth Curling Club claims to be the first club in the world, having been formally constituted in 1716; it is still in existence today. Kilsyth also claims the oldest purpose-built curling pond in the world at Colzium, in the form of a low dam creating a shallow pool some 100 by 250 metres (330 by 820 ft) in size. The International Olympic Committee recognises the Royal Caledonian Curling Club (founded as the Grand Caledonian Curling Club in 1838) as developing the first official rules for the sport. These rules were preceded, however, by An Account of the Game of Curling, written in 1811 by Rev James Ramsay of Gladsmuir, a member of the Duddingston Curling Club, which, although not written as a "rule book", speculates on the game's origin and explains the method of play.
In the early history of curling, the playing stones were simply flat-bottomed stones from rivers or fields, which lacked a handle and were of inconsistent size, shape, and smoothness. Some early stones had holes for a finger and the thumb, akin to ten-pin bowling balls. Unlike today, the thrower had little control over the 'curl' or velocity and relied more on luck than on precision, skill, and strategy. The sport was often played on frozen rivers although purpose-built ponds were later created in many Scottish towns. For example, the Scottish poet David Gray describes whisky-drinking curlers on the Luggie Water at Kirkintilloch.
In Darvel, East Ayrshire, the weavers relaxed by playing curling matches using the heavy stone weights from the looms' warp beams, fitted with a detachable handle for the purpose. Central Canadian curlers often used 'irons' rather than stones until the early 1900s; Canada is the only country known to have done so, while others experimented with wood or ice-filled tins.
Outdoor curling was very popular in Scotland between the 16th and 19th centuries because the climate provided good ice conditions every winter. Scotland is home to the international governing body for curling, the World Curling Federation in Perth, which originated as a committee of the Royal Caledonian Curling Club, the mother club of curling.
In the 19th century several private railway stations in the United Kingdom were built to serve curlers attending bonspiels, such as those at Aboyne, Carsbreck, and Drummuir.
Today, the sport is most firmly established in Canada, having been taken there by Scottish emigrants. The Royal Montreal Curling Club, the oldest established sports club still active in North America, was established in 1807. The first curling club in the United States was established in 1830, and the sport was introduced to Switzerland and Sweden before the end of the 19th century, also by Scots. Today, curling is played all over Europe and has spread to Brazil, Japan, Australia, New Zealand, China, and Korea.
The first world championship for curling was limited to men and was known as the Scotch Cup, held in Falkirk and Edinburgh, Scotland, in 1959. The first world title was won by the Canadian team from Regina, Saskatchewan, skipped by Ernie Richardson. (The skip is the team member who calls the shots; see below.)
Curling was one of the first sports to become popular with women and girls.
Curling has been a medal sport in the Winter Olympic Games since the 1998 Winter Olympics. It currently includes men's, women's, and mixed doubles tournaments (the mixed doubles event was held for the first time in 2018).
In February 2002, the International Olympic Committee retroactively decided that the curling competition from the 1924 Winter Olympics (originally called Semaine des Sports d'Hiver, or International Winter Sports Week) would be considered an official Olympic event and no longer a demonstration event. Thus, the first Olympic medals in curling, which at the time was played outdoors, were awarded for the 1924 Winter Games, with the gold medal won by Great Britain, two silver medals by Sweden, and the bronze by France. A demonstration tournament was also held during the 1932 Winter Olympic Games between four teams from Canada and four teams from the United States, with Canada winning 12 games to 4.
Since the sport's official addition at the 1998 Olympics, Canada has dominated it, with its men's teams winning gold in 2006, 2010, and 2014, and silver in 1998 and 2002. The women's teams won gold in 1998 and 2014, silver in 2010, and bronze in 2002 and 2006. The mixed doubles team won gold in 2018.
The playing surface or curling sheet is defined by the World Curling Federation Rules of Curling. It is a rectangular area of ice, carefully prepared to be as flat and level as possible, 146 to 150 feet (45 to 46 m) in length by 14.5 to 16.5 feet (4.4 to 5.0 m) in width. The shorter borders of the sheet are called the backboards.
A target, the house, is centred on the intersection of the centre line, drawn lengthwise down the centre of the sheet, and the tee line, drawn 16 feet (4.9 m) from, and parallel to, the backboard. These lines divide the house into quarters. The house consists of a centre circle (the button) and three concentric rings, 4, 8, and 12 feet in diameter, formed by painting or laying a coloured vinyl sheet under the ice; the rings are usually distinguished by colour. A stone must at least touch the outer ring in order to score (see Scoring below); otherwise, the rings are merely a visual aid for aiming and judging which stone is closer to the button. Two hog lines are drawn 37 feet (11 m) from, and parallel to, the backboard.
The hacks, which give the thrower something to push against when making the throw, are fixed 12 feet (3.7 m) behind each button. On indoor rinks, there are usually two fixed hacks, rubber-lined holes, one on each side of the centre line, with the inside edge no more than 3 inches (76 mm) from the centre line and the front edge on the hack line. A single moveable hack may also be used.
The ice may be natural but is usually frozen by a refrigeration plant pumping a brine solution through numerous pipes fixed lengthwise at the bottom of a shallow pan of water. Most curling clubs have an ice maker whose main job is to care for the ice. At the major curling championships, ice maintenance is extremely important. Large events, such as national/international championships, are typically held in an arena that presents a challenge to the ice maker, who must constantly monitor and adjust the ice and air temperatures as well as air humidity levels to ensure a consistent playing surface. It is common for each sheet of ice to have multiple sensors embedded in order to monitor surface temperature, as well as probes set up in the seating area (to monitor humidity) and in the compressor room (to monitor brine supply and return temperatures). The surface of the ice is maintained at a temperature of around 23 °F (−5 °C).
A key part of the preparation of the playing surface is the spraying of water droplets onto the ice, which form pebble on freezing. The pebbled ice surface resembles an orange peel, and the stone moves on top of the pebbled ice. The pebble, along with the concave bottom of the stone, decreases the friction between the stone and the ice, allowing the stone to travel farther. As the stone moves over the pebble, any rotation of the stone causes it to curl, or travel along a curved path. The amount of curl (commonly referred to as the feet of curl) can change during a game as the pebble wears; the ice maker must monitor this and be prepared to scrape and re-pebble the surface prior to each game.
The curling stone (also sometimes called a rock in North America) is made of granite and is specified by the World Curling Federation, which requires a weight between 17.24 and 19.96 kilograms (38.0 and 44.0 lb), a maximum circumference of 914 millimetres (36.0 in), and a minimum height of 114.3 millimetres (4.5 in). The only part of the stone in contact with the ice is the running surface, a narrow, flat annulus or ring, 6.4 to 12.7 millimetres (0.25 to 0.50 in) wide and about 130 millimetres (5.1 in) in diameter; the sides of the stone bulge convex down to the ring, with the inside of the ring hollowed concave to clear the ice. This concave bottom was first proposed by J. S. Russell of Toronto, Ontario, Canada sometime after 1870, and was subsequently adopted by Scottish stone manufacturer Andrew Kay.
The granite for the stones comes from two sources: Ailsa Craig, an island off the Ayrshire coast of Scotland, and the Trefor Granite Quarry on the Llŷn Peninsula in Gwynedd, Wales. These locations provide four variations in colour, known as Ailsa Craig Common Green, Ailsa Craig Blue Hone, Blue Trefor, and Red Trefor.
Blue Hone has very low water absorption, which prevents the action of repeatedly freezing water from eroding the stone. Ailsa Craig Common Green is a lesser quality granite than Blue Hone. In the past, most curling stones were made from Blue Hone, but the island is now a wildlife reserve, and the quarry is restricted by environmental conditions that exclude blasting.
Kays of Scotland has been making curling stones in Mauchline, Ayrshire, since 1851 and has the exclusive rights to the Ailsa Craig granite, granted by the Marquess of Ailsa, whose family has owned the island since 1560. According to the 1881 Census, Andrew Kay employed 30 people in his curling stone factory in Mauchline. The last harvest of Ailsa Craig granite by Kays took place in 2013, after a hiatus of 11 years; 2,000 tons were harvested, sufficient to fill anticipated orders through at least 2020. Kays have been involved in providing curling stones for the Winter Olympics since Chamonix in 1924 and has been the exclusive manufacturer of curling stones for the Olympics since the 2006 Winter Olympics.
Trefor granite comes from the Yr Eifl or Trefor Granite Quarry in the village of Trefor on the north coast of the Llŷn Peninsula in Gwynedd, Wales, which has produced granite since 1850. Trefor granite comes in shades of pink, blue, and grey. The quarry supplies curling stone granite exclusively to the Canada Curling Stone Company, which has been producing stones since 1992 and supplied the stones for the 2002 Winter Olympics.
A handle is attached by a bolt running vertically through a hole in the centre of the stone. The handle allows the stone to be gripped and rotated upon release; on properly prepared ice the rotation will bend (curl) the path of the stone in the direction in which the front edge of the stone is turning, especially as the stone slows. Handles are coloured to identify each team, two popular colours in major tournaments being red and yellow. In competition, an electronic handle known as the Eye on the Hog may be fitted to detect hog line violations. This electronically detects whether the thrower's hand is in contact with the handle as it passes the hog line and indicates a violation by lights at the base of the handle (see delivery below). The eye on the hog eliminates human error and the need for hog line officials. It is mandatory in high-level national and international competition, but its cost, around US$650 each, currently puts it beyond the reach of most curling clubs.
The curling broom, or brush, is used to sweep the ice surface in the path of the stone (see sweeping) and is also often used as a balancing aid during delivery of the stone.
Prior to the 1950s, most curling brooms were made of corn strands and were similar to household brooms of the day. In 1958, Fern Marchessault of Montreal inverted the corn straw in the centre of the broom. This style of corn broom was referred to as the Blackjack.
Artificial brooms made from human-made fabrics rather than corn, such as the Rink Rat, also became common later during this time period. Until the late 1960s, curling brushes were used primarily by some Scots, as well as by recreational and elderly curlers, as a substitute for corn brooms, since the technique was easier to learn. In the late 1960s, competitive curlers from Calgary, Alberta, such as John Mayer, Bruce Stewart, and, later, the world junior championship teams skipped by Paul Gowsell, proved that the curling brush could be just as (or more) effective without all the blisters common to corn broom use. During that time period, there was much debate in competitive curling circles as to which sweeping device was more effective: brush or broom. Eventually, the brush won out, with the majority of curlers making the switch to the less costly and more efficient brush. Today, brushes have replaced traditional corn brooms at every level of curling; it is rare now to see a curler using a corn broom on a regular basis.
Curling brushes may have fabric, hog hair, or horsehair heads. Modern curling brush handles are usually hollow tubes made of fibreglass or carbon fibre instead of a solid length of wooden dowel. These hollow tube handles are lighter and stronger than wooden handles, allowing faster sweeping and also enabling more downward force to be applied to the broom head with reduced shaft flex.
New "directional fabric" brooms were introduced in 2014. Dubbed the "broomgate" controversy, they were able to better navigate the path of a curling stone than existing brooms. Players were worried that these brooms would alter the fundamentals of the sport by reducing the level of skill required, accusing them of giving players an unfair advantage, and at least thirty-four elite teams signed a statement pledging not to use them. The new brooms were temporarily banned by the World Curling Federation and Curling Canada for the 2015–2016 season. As a result of the "broomgate" controversy, as of 2016, only one standardized brush head is approved by the World Curling Federation for competitive play.
Curling shoes are similar to ordinary athletic shoes except for special soles; the slider shoe (usually known as a "slider") is designed for the sliding foot and the "gripper shoe" (usually known as a gripper) for the foot that kicks off from the hack.
The slider is designed to slide and typically has a Teflon sole. It is worn by the thrower during delivery from the hack and by sweepers or the skip to glide down the ice when sweeping or otherwise traveling down the sheet quickly. Stainless steel and "red brick" sliders with lateral blocks of PVC on the sole are also available as alternatives to Teflon. Most shoes have a full-sole sliding surface, but some shoes have a sliding surface covering only the outline of the shoe and other enhancements with the full-sole slider. Some shoes have small disc sliders covering the front and heel portions or only the front portion of the foot, which allow more flexibility in the sliding foot for curlers playing with tuck deliveries. When a player is not throwing, the player's slider shoe can be temporarily rendered non-slippery by using a slip-on gripper. Ordinary athletic shoes may be converted to sliders by using a step-on or slip-on Teflon slider or by applying electrical or gaffer tape directly to the sole or over a piece of cardboard. This arrangement often suits casual or beginning players.
The gripper is worn by the thrower on the foot that kicks off from the hack during delivery and is designed to grip the ice. It may have a normal athletic shoe sole or a special layer of rubbery material applied to the sole of a thickness to match the sliding shoe. The toe of the hack foot shoe may also have a rubberised coating on the top surface or a flap that hangs over the toe to reduce wear on the top of the shoe as it drags on the ice behind the thrower.
Other types of equipment include:
The purpose of a game is to score points by getting stones closer to the house centre, or the "button", than the other team's stones. Players from either team alternate in taking shots from the far side of the sheet. An end is complete when all eight rocks from each team have been delivered, a total of sixteen stones. If the teams are tied at the end of regulation, often extra ends are played to break the tie. The winner is the team with the highest score after all ends have been completed (see Scoring below). A game may be conceded if winning the game is infeasible.
International competitive games are generally ten ends, so most of the national championships that send a representative to the World Championships or Olympics also play ten ends. However, there is a movement on the World Curling Tour to make the games only eight ends. Most tournaments on that tour are eight ends, as are the vast majority of recreational games.
In international competition, each side is given 73 minutes to complete all of its throws. Each team is also allowed two one-minute timeouts per 10-end game. If extra ends are required, each team is allowed 10 minutes of playing time to complete its throws and one added 60-second timeout for each extra end. However, the "thinking time" system, in which the delivering team's game timer stops as soon as the shooter's rock crosses the t-line during the delivery, is becoming more popular, especially in Canada. This system allows each team 38 minutes per 10 ends, or 30 minutes per 8 ends, to make strategic and tactical decisions, with 4 minutes and 30 seconds an end for extra ends. The "thinking time" system was implemented after it was recognized that teams favouring shots which take more time for the stones to come to rest were effectively penalized in available time compared with teams which primarily use hits, which require far less time per shot.
The process of sliding a stone down the sheet is known as the delivery or throw. Players, with the exception of the skip, take turns throwing and sweeping; when one player (e.g., the lead) throws, the players not delivering (the second and third) sweep (see Sweeping, below). When the skip throws, the vice-skip takes their role.
The skip, or the captain of the team, determines the desired stone placement and the required weight, turn, and line that will allow the stone to stop there. The placement will be influenced by the tactics at this point in the game, which may involve taking out, blocking, or tapping another stone.
The skip may communicate the weight, turn, line, and other tactics by calling or tapping a broom on the ice. In the case of a takeout, guard, or a tap, the skip will indicate the stones involved.
Before delivery, the running surface of the stone is wiped clean and the path across the ice swept with the broom if necessary, since any dirt on the bottom of a stone or in its path can alter the trajectory and ruin the shot. Intrusion by a foreign object is called a pick-up or pick.
The thrower starts from the hack. The thrower's gripper shoe (with the non-slippery sole) is positioned against one of the hacks; for a right-handed curler the right foot is placed against the left hack and vice versa for a left-hander. The thrower, now in the hack, lines the body up with shoulders square to the skip's broom at the far end for line.
The stone is placed in front of the foot now in the hack. Rising slightly from the hack, the thrower pulls the stone back (some older curlers may actually raise the stone in this backward movement) then lunges smoothly out from the hack pushing the stone ahead while the slider foot is moved in front of the gripper foot, which trails behind. The thrust from this lunge determines the weight, and hence the distance the stone will travel. Balance may be assisted by a broom held in the free hand with the back of the broom down so that it slides. One older writer suggests the player keep "a basilisk glance" at the mark.
There are currently two common types of delivery: the typical flat-foot delivery and the Manitoba tuck delivery, in which the curler slides on the front ball of the foot.
When the player releases the stone, a rotation (called the turn) is imparted by a slight clockwise or counter-clockwise twist of the handle from around the two or ten o'clock position to the twelve o'clock position on release. A typical rate of turn is about 2½ rotations before the stone comes to rest.
The stone must be released before its front edge crosses the near hog line. In major tournaments, the "Eye on the Hog" sensor is commonly used to enforce this rule. The sensor is in the handle of the stone and will indicate whether the stone was released before the near hog line. The lights on the stone handle will either light up green, indicating that the stone has been legally thrown, or red, in which case the illegally thrown stone will be immediately pulled from play instead of waiting for the stone to come to rest.
The stone must clear the far hog line or else be removed from play (hogged); an exception is made if a stone fails to come to rest beyond the far hog line after rebounding from a stone in play just past the hog line.
After the stone is delivered, its trajectory is influenced by the two sweepers under instruction from the skip. Sweeping is done for several reasons: to make the stone travel farther, to decrease the amount of curl, and to clean debris from the stone's path. Sweeping is able to make the stone travel farther and straighter by slightly melting the ice under the brooms, thus decreasing the friction as the stone travels across that part of the ice. The stones curl more as they slow down, so sweeping early in travel tends to increase distance as well as straighten the path, and sweeping after sideways motion is established can increase the sideways distance.
One of the basic technical aspects of curling is knowing when to sweep. When the ice in front of the stone is swept a stone will usually travel both farther and straighter, and in some situations one of those is not desirable. For example, a stone may be traveling too fast (said to have too much weight) but require sweeping to prevent curling into another stone. The team must decide which is better: getting by the other stone but traveling too far, or hitting the stone.
Much of the yelling that goes on during a curling game is the skip and sweepers exchanging information about the stone's line and weight and deciding whether to sweep. The skip evaluates the path of the stone and calls to the sweepers to sweep as necessary to maintain the intended track. The sweepers themselves are responsible for judging the weight of the stone, ensuring that the length of travel is correct and communicating the weight of the stone back to the skip. Many teams use a number system to communicate in which of 10 zones the sweepers estimate the stone will stop. Some sweepers use stopwatches to time the stone from the back line or tee line to the nearest hog line to aid in estimating how far the stone will travel.
Usually, the two sweepers will be on opposite sides of the stone's path, although depending on which side the sweepers' strengths lie, this may not always be the case. Speed and pressure are vital to sweeping. In gripping the broom, one hand should be one third of the way from the top (non-brush end) of the handle while the other hand should be one third of the way from the head of the broom. The angle of the broom to the ice should be such that the most force possible can be exerted on the ice. The precise amount of pressure may vary from relatively light brushing ("just cleaning", to ensure debris will not alter the stone's path) to maximum-pressure scrubbing.
Sweeping is allowed anywhere on the ice up to the tee line; once the leading edge of a stone crosses the tee line only one player may sweep it. Additionally, if a stone is behind the tee line one player from the opposing team is allowed to sweep it. This is the only case that a stone may be swept by an opposing team member. In international rules, this player must be the skip, but if the skip is throwing, then the sweeping player must be the third.
Occasionally, players may accidentally touch a stone with their broom or a body part. This is often referred to as burning a stone. Players touching a stone in such a manner are expected to call their own infraction as a matter of good sportsmanship. Touching a stationary stone when no stones are in motion (there is no delivery in progress) is not an infraction as long as the stone is struck in such a manner that its position is not altered, and this is a common way for the skip to indicate a stone that is to be taken out.
When a stone is touched when stones are in play, the remedies vary between leaving the stones as they end up after the touch, replacing the stones as they would have been if no stone were touched, or removal of the touched stone from play. In non-officiated league play, the skip of the non-offending team has the final say on where the stones are placed after the infraction.
Many different types of shots are used to carefully place stones for strategic or tactical reasons; they fall into three fundamental categories as follows:
Guards are thrown in front of the house in the free guard zone, usually to protect a stone or to make the opposing team's shot difficult. Guard shots include the centre-guard, on the centreline, and the corner-guards to the left or right sides of the centre line. See Free Guard Zone below.
Draws are thrown only to reach the house. Draw shots include raise, come-around, and freeze shots.
Takeouts are intended to remove stones from play and include the peel, hit-and-roll, and double shots.
For a more complete listing, see Glossary of curling terms.
The free guard zone is the area of the curling sheet between the hog line and tee line, excluding the house. Until five stones have been played (three from the side without hammer and two from the side with hammer), stones in the free guard zone may not be removed by an opponent's stone, although they can be moved within the playing area. If a stone in the free guard zone is knocked out of play, it is placed back in the position it was in before the shot was thrown and the opponent's stone is removed from play. This rule is known as the five-rock rule or the free guard zone rule (previous versions of the free guard zone rule only limited removing guards from play in the first three or four rocks).
This rule, a relatively recent addition to curling, was added in response to a strategy by teams of gaining a lead in the game and then peeling all of the opponents' stones (knocking them out of play at an angle that caused the shooter's stone to also roll out of play, leaving no stones on the ice). By knocking all stones out, the opponents could at best score one point, if they had the last stone of the end (called the hammer). If the team peeling the rocks had the hammer, they could peel rock after rock, which would blank the end (leave the end scoreless), keeping the last rock advantage for another end. This strategy had developed (mostly in Canada) as ice-makers had become skilled at creating a predictable ice surface and newer brushes allowed greater control over the rock. While a sound strategy, this made for an unexciting game. Observers at the time noted that if two teams equally skilled in the peel game faced each other on good ice, the outcome of the game would be predictable from who won the coin flip to have last rock (or had earned it in the schedule) at the beginning of the game. The 1990 Brier (Canadian men's championship) was considered by many curling fans to be boring to watch because of the amount of peeling, and the quick adoption of the free guard zone rule the following year reflected how disliked this aspect of the game had become.
The free guard zone rule was originally called the Modified Moncton Rule and was developed from a suggestion made by Russ Howard for the Moncton 100 cashspiel in Moncton, New Brunswick, in January 1990. "Howard's Rule" (later known as the Moncton Rule), used for the tournament and based on a practice drill his team used, had the first four rocks in play unable to be removed no matter where they were at any time during the end. This method of play was altered by restricting the area in which a stone was protected to the free guard zone only for the first four rocks thrown and adopted as a four-rock free guard zone rule for international competition shortly after. Canada kept to the traditional rules until a three-rock free guard zone rule was adopted for the 1993–94 season. After several years of having the three-rock rule used for the Canadian championships and the winners then having to adjust to the four-rock rule in the World Championships, the Canadian Curling Association adopted the four-rock free guard zone in the 2002–2003 season.
One strategy that has been developed by curlers in response to the free guard zone (Kevin Martin from Alberta is one of the best examples) is the "tick" game, where a shot is made attempting to knock (tick) the guard to the side, far enough that it is difficult or impossible to use but still remaining in play while the shot itself goes out of play. The effect is functionally identical to peeling the guard but significantly harder, as a shot that hits the guard too hard (knocking it out of play) results in its being replaced, while not hitting it hard enough can result in it still being tactically useful for the opposition. There is also a greater chance that the shot will miss the guard entirely because of the greater accuracy required to make the shot. Because of the difficulty of making this type of shot, only the best teams will normally attempt it, and it does not dominate the game the way the peel formerly did. Steve Gould from Manitoba popularized ticks played across the face of the guard stone. These are easier to make because they impart less speed on the object stone, therefore increasing the chance that it remains in play even if a bigger chunk of it is hit.
With the tick shot reducing the effectiveness of the four-rock rule, the Grand Slam of Curling series of bonspiels adopted a five-rock rule in 2014. In 2017, the five-rock rule was adopted by the World Curling Federation and member organizations for official play, beginning in the 2018–19 season.
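To make the free guard zone mechanics above concrete, here is a minimal sketch of the five-rock rule's core check. It is illustrative only: the function name and parameters are hypothetical, and it assumes a single struck guard with no chain reactions.

```python
# Sketch of the five-rock (free guard zone) rule: while fewer than five
# stones have been played in the end, an opponent's stone resting in the
# free guard zone may be moved but not knocked out of play. If it is
# knocked out, the guard is restored to its old position and the
# delivered stone is removed instead.

def must_restore_guard(stones_played_before_shot, target_in_free_guard_zone,
                       target_knocked_out, target_is_opponents):
    """Return True if the shot violates the five-rock rule."""
    return (stones_played_before_shot < 5
            and target_in_free_guard_zone
            and target_knocked_out
            and target_is_opponents)

print(must_restore_guard(2, True, True, True))  # True: guard is replaced
print(must_restore_guard(5, True, True, True))  # False: the sixth stone may peel
```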
The last rock in an end is called the hammer, and throwing the hammer gives a team a tactical advantage. Before the game, teams typically decide who gets the hammer in the first end either by chance (such as a coin toss), by a "draw-to-the-button" contest, where a representative of each team shoots to see who gets closer to the centre of the rings, or, particularly in tournament settings like the Winter Olympics, by a comparison of each team's win–loss record. In all subsequent ends, the team that did not score in the preceding end gets to throw second, thus having the hammer. In the event that neither team scores, called a blanked end, the hammer remains with the same team. Naturally, it is easier to score points with the hammer than without; the team with the hammer generally tries to score two or more points. If only one point is possible, the skip may try to avoid scoring at all in order to retain the hammer the next end, giving the team another chance to use the hammer advantage to try to score two points. Scoring without the hammer is commonly referred to as stealing, or a steal, and is much more difficult.
Curling is a game of strategy, tactics, and skill. The strategy depends on the team's skill, the opponent's skill, the conditions of the ice, the score of the game, how many ends remain and whether the team has last-stone advantage (the hammer). A team may play an end aggressively or defensively. Aggressive playing will put a lot of stones in play by throwing mostly draws; this makes for an exciting game and although risky the rewards can be great. Defensive playing will throw a lot of hits preventing a lot of stones in play; this tends to be less exciting and less risky. A good drawing team will usually opt to play aggressively, while a good hitting team will opt to play defensively.
If a team does not have the hammer in an end, it will opt to try to clog up the four-foot zone in the house to deny the opposing team access to the button. This can be done by throwing "centre line" guards in front of the house on the centre line, which can be tapped into the house later or drawn around. If a team has the hammer, they will try to keep this four-foot zone free so that they have access to the button area at all times. A team with the hammer may throw a corner guard as their first stone of an end placed in front of the house but outside the four-foot zone to utilize the free guard zone. Corner guards are key for a team to score two points in an end, because they can either draw around it later or hit and roll behind it, making the opposing team's shot to remove it more difficult.
Ideally, the strategy in an end for a team with the hammer is to score two points or more. Scoring one point is often a wasted opportunity, as they will then lose last-stone advantage for the next end. If a team cannot score two points, they will often attempt to "blank an end" by removing any leftover opposition stones and rolling out; or, if there are no opposition stones, just throwing the stone through the house so that no team scores any points, and the team with the hammer can try again the next end to score two or more with it. Generally, a team without the hammer would want to either force the team with the hammer to only one point, so that they can get the hammer back, or "steal" the end by scoring one or more points of their own.
Large leads are often defended by displacing the opponent's stones to reduce their opportunity to score multiple points. However, a comfortably leading team that leaves their own stones in play becomes vulnerable as the opponent can draw around guard stones, stones in the house can be "tapped back" if they are in front of the tee line, or "frozen onto" if they are behind the tee line. A frozen stone is placed in front of and touching the opponent's stone and is difficult to remove. At this point, a team may opt for "peels"; throws with a lot of "weight" that can move opposition stones out of play.
It is common at any level for a losing team to terminate the match before all ends are completed if it believes it no longer has a realistic chance of winning. Competitive games end once the losing team has "run out of rocks"—that is, once it has fewer stones in play and available for play than the number of points needed to tie the game.
Most decisions about rules are left to the skips, although in official tournaments, decisions may be left to the officials. However, all scoring disputes are handled by the vice skip. No players other than the vice skip from each team should be in the house while score is being determined. In tournament play, the most frequent circumstance in which a decision has to be made by someone other than the vice skip is the failure of the vice skips to agree on which stone is closest to the button. An independent official (supervisor at Canadian and World championships) then measures the distances using a specially designed device that pivots at the centre of the button. When no independent officials are available, the vice skips measure the distances.
The winner is the team having the highest number of accumulated points at the completion of ten ends. Points are scored at the conclusion of each of these ends as follows: when each team has thrown its eight stones, the team with the stone closest to the button wins that end; the winning team is then awarded one point for each of its own stones lying closer to the button than the opponent's closest stone.
Only stones that are in the house are considered in the scoring. A stone is in the house if it lies within the 12-foot (3.7 m) zone or any portion of its edge lies over the edge of the ring. Since the bottom of the stone is rounded, a stone just barely in the house will not have any actual contact with the ring, which will pass under the rounded edge of the stone, but it still counts. This type of stone is known as a biter.
It may not be obvious to the eye which of the two rocks is closer to the button (centre) or if a rock is actually biting or not. There are specialized devices to make these determinations, but these cannot be brought out until after an end is completed. Therefore, a team may make strategic decisions during an end based on assumptions of rock position that turn out to be incorrect.
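The scoring rule described above reduces to a short computation. The following sketch assumes each stone's position is summarised as its distance from the centre of the button, in feet; the function and the example distances are hypothetical, and ties are ignored for simplicity.

```python
# Sketch of end scoring: the team with shot stone scores one point for
# each of its stones closer to the button than the opponent's closest.
# Only stones in the house count; a stone counts if any part of it is
# over the 12-foot ring (a "biter").

HOUSE_RADIUS_FT = 6.0      # the 12-foot ring has a 6-foot radius
STONE_RADIUS_FT = 0.48     # roughly half the stone's maximum diameter

def score_end(team_a_distances, team_b_distances):
    """Return (scoring_team, points) from each team's stone distances (ft)."""
    in_house = lambda ds: sorted(d for d in ds
                                 if d - STONE_RADIUS_FT <= HOUSE_RADIUS_FT)
    a, b = in_house(team_a_distances), in_house(team_b_distances)
    if not a and not b:
        return None, 0                       # blank end
    if not b:
        return "A", len(a)
    if not a:
        return "B", len(b)
    if a[0] < b[0]:                          # team A holds shot stone
        return "A", sum(1 for d in a if d < b[0])
    return "B", sum(1 for d in b if d < a[0])

# A's stones lie 1.0, 3.0 and 7.5 ft out; B's closest is 2.0 ft out.
print(score_end([1.0, 3.0, 7.5], [2.0, 5.0]))  # ('A', 1): one A stone beats 2.0
```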
The score is marked on a scoreboard, of which there are two types: the baseball type and the club scoreboard.
The baseball-style scoreboard was created for televised games for audiences not familiar with the club scoreboard. The ends are marked by columns 1 through 10 (or 11 for the possibility of an extra end to break ties) plus an additional column for the total. Below this are two rows, one for each team, containing the team's score for that end and their total score in the right-hand column.
The club scoreboard is traditional and used in most curling clubs. Scoring on this board only requires the use of (up to) 11 digit cards, whereas with baseball-type scoring an unknown number of multiples of the digits (especially low digits like 1) may be needed. The numbered centre row represents various possible scores, and the numbers placed in the team rows represent the end in which that team achieved that cumulative score. If the red team scores three points in the first end (called a three-ender), then a 1 (indicating the first end) is placed beside the number 3 in the red row. If they score two more in the second end, then a 2 will be placed beside the 5 in the red row, indicating that the red team has five points in total (3+2). This scoreboard works because only one team can get points in an end. However, some confusion may arise if neither team scores points in an end; this is called a blank end. The blank end numbers are usually listed in the farthest column on the right in the row of the team that has the hammer (last rock advantage), or on a special spot for blank ends.
The example of the men's final at the 2006 Winter Olympics illustrates the difference between the two types.
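The club-board bookkeeping also lends itself to a short sketch. The helper below is hypothetical and simply mirrors the placement rule described above: each end's number is recorded beside the scoring team's new cumulative total, with blank ends noted against the team holding the hammer.

```python
# Sketch of club-scoreboard bookkeeping: the numbered centre row is the
# cumulative score, and the card placed beside a number is the end in
# which that total was reached. Blank ends are recorded separately.

def club_scoreboard(end_results):
    """end_results: one (team, points) pair per end, in order; use
    (hammer_team, 0) for a blank end. Returns the card placements and
    the list of blank ends."""
    totals = {"red": 0, "yellow": 0}
    cards = {"red": {}, "yellow": {}}
    blank_ends = []
    for end_number, (team, points) in enumerate(end_results, start=1):
        if points == 0:
            blank_ends.append((end_number, team))  # hammer is retained
            continue
        totals[team] += points
        cards[team][totals[team]] = end_number     # card beside the new total
    return cards, blank_ends

# Red scores 3 in end 1 and 2 in end 2: a "1" beside 3, then a "2" beside 5.
print(club_scoreboard([("red", 3), ("red", 2), ("yellow", 1)]))
# ({'red': {3: 1, 5: 2}, 'yellow': {1: 3}}, [])
```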
Eight points – all the rocks thrown by one team counting – is the highest score possible in an end, and is known as an "eight-ender" or "snowman". Scoring an eight-ender against a competent team is very difficult; in curling, it is the equivalent of pitching a perfect game in baseball. Probably the best-known snowman came at the 2006 Players' Championships. Future (2007) World Champion Kelly Scott scored eight points in one of her games against 1998 World bronze medalist Cathy King.
Competition teams are normally named after the skip, for example, Team Martin after skip Kevin Martin. Amateur league players can (and do) creatively name their teams, but when in competition (a bonspiel) the official team will have a standard name.
Top curling championships are typically played by all-male or all-female teams. It is known as mixed curling when a team consists of two men and two women. For many years, in the absence of world championship or Olympic mixed curling events, national championships (of which the Canadian Mixed Curling Championship was the most prominent) were the highest-level mixed curling competitions. However, a European Mixed Curling Championship was inaugurated in 2005, a World Mixed Doubles Curling Championship was established in 2008, and the European Mixed Championship was replaced with the World Mixed Curling Championship in 2015. A mixed tournament was held at the Olympic level for the first time in 2018, although it was a doubles tournament, not a four-person one.
Curling tournaments may use the Schenkel system for determining the participants in matches.
Curling is played in many countries, including Canada, the United Kingdom (especially Scotland), the United States, Norway, Sweden, Switzerland, Denmark, Finland, and Japan, all of which compete in the world championships.
Curling has been depicted by many artists including: George Harvey, John Levack, The Dutch School, Charles Martin Hardie, John Elliot Maguire, John McGhie, and John George Brown.
Curling is particularly popular in Canada. Improvements in ice making and changes in the rules to increase scoring and promote complex strategy have increased the already high popularity of the sport in Canada, and large television audiences watch annual curling telecasts, especially the Scotties Tournament of Hearts (the national championship for women), the Montana's Brier (the national championship for men), and the women's and men's world championships.
Despite the Canadian province of Manitoba's small population (ranked 5th of 10 Canadian provinces), Manitoban teams have won the Brier more times than teams from any other province, except for Alberta. The Tournament of Hearts and the Brier are contested by provincial and territorial champions, and the world championships by national champions.
Curling is the provincial sport of Saskatchewan. From there, Ernie Richardson and his family team dominated Canadian and international curling during the late 1950s and early 1960s and have been considered to be the best male curlers of all time. Sandra Schmirler led her team to the first-ever gold medal in women's curling in the 1998 Winter Olympics. When she died two years later from cancer, over 15,000 people attended her funeral, and it was broadcast on national television.
More so than in many other team sports, good sportsmanship, often referred to as the "Spirit of Curling", is an integral part of curling. The Spirit of Curling also leads teams to congratulate their opponents for making a good shot, strong sweeping, or spectacular form. Perhaps most importantly, the Spirit of Curling dictates that one never cheers mistakes, misses, or gaffes by one's opponent (unlike most team sports), and one should not celebrate one's own good shots during the game beyond modest acknowledgement of the shot such as a head nod, fist bump, or thumbs-up gesture. Modest congratulation, however, may be exchanged between winning team members after the match. On-the-ice celebration is usually reserved for the winners of a major tournament after winning the final game of the championship. It is completely unacceptable to attempt to throw opposing players off their game by way of negative comment, distraction, or heckling.
A match traditionally begins with players shaking hands with and saying "good curling" or "have a pleasant game" to each member of the opposing team. It is also traditional in some areas for the winning team to buy the losing team a drink after the game. Even at the highest levels of play, players are expected to call their own fouls.
It is not uncommon for a team to concede a curling match after it believes it no longer has any hope of winning. Concession is an honourable act and does not carry the stigma associated with quitting. It also allows for more socializing. To concede a match, members of the losing team offer congratulatory handshakes to the winning team. Thanks, wishes of future good luck, and hugs are usually exchanged between the teams. To continue playing when a team has no realistic chance of winning can be seen as a breach of etiquette.
Curling has been adapted for wheelchair users and people otherwise unable to throw the stone from the hack. These curlers may use a device known as a "delivery stick". The stick holds on to the handle of the stone and is then pushed along by the curler; at the end of delivery, the curler pulls back on the stick, which releases it from the stone. The Canadian Curling Association Rules of Curling allow the use of a delivery stick in club play but do not permit it in championships.
The delivery stick was specifically invented for elderly curlers in Canada in 1999. In early 2016 an international initiative started to allow use of the delivery sticks by players over 60 years of age in World Curling Federation Senior Championships, as well as in any projected Masters (60+) Championship that develops in the future.
Terms used to describe the game include:
The ice in the game may be fast (keen) or slow. If the ice is keen, a rock will travel farther with a given amount of weight (throwing force) on it. The speed of the ice is measured in seconds. One such measure, known as "hog-to-hog" time, is the time in seconds the rock takes from the moment it crosses the near hog line until it crosses the far hog line; a lower number means a faster stone. The ice in a match will be somewhat consistent, so this measure can also be used to estimate how far down the ice the rock will travel. Once it is determined that a rock taking (for example) 13 seconds to go from hog line to hog line will stop on the tee line, the curler knows that if a future stone matches that hog-to-hog time, it will likely stop at approximately the same location. As an example, on keen ice, common times might be 16 seconds for guards, 14 seconds for draws, and 8 seconds for peel weight.
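As a worked illustration of hog-to-hog timing, the sketch below matches a measured split against reference times observed earlier on the same sheet. The reference values are the illustrative ones from the paragraph above and would differ on real ice; the function is hypothetical.

```python
# Sketch of interpreting a hog-to-hog split: longer times mean a slower
# stone and a shorter finish, so the measured split is matched to the
# nearest reference time recorded for this sheet.

REFERENCE_TIMES = {
    "guard weight": 16.0,  # seconds, illustrative keen-ice values
    "draw weight": 14.0,
    "peel weight": 8.0,
}

def judge_weight(hog_to_hog_seconds):
    """Return the reference weight whose time is nearest the measured split."""
    return min(REFERENCE_TIMES,
               key=lambda w: abs(REFERENCE_TIMES[w] - hog_to_hog_seconds))

print(judge_weight(13.6))  # 'draw weight' -- likely finishing in the house
print(judge_weight(15.8))  # 'guard weight' -- likely stopping short
```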
The back line to hog line speed is used principally by sweepers to get an initial sense of the weight of a stone. As an example, on keen ice, common times might be 4.0 seconds for guards, 3.8 seconds for draws, 3.2 for normal hit weight, and 2.9 seconds for peel weight. Especially at the club level, this metric can be misleading, because amateurs sometimes push stones on release, making the stone faster than its back-to-hog time suggests. | [
Craven Cottage

Craven Cottage is a football stadium in Fulham, West London, England, which has been the home of Fulham since 1896. The ground's capacity is 22,384; the record attendance is 49,335, for a game against Millwall in 1938. Next to Bishop's Park on the banks of the River Thames, the ground takes its name from the original Cottage, a royal hunting lodge, and has a history dating back over 300 years.
The stadium has also been used by the men's national teams of the United States, Australia, Ireland, and Canada, and was formerly the home ground of rugby league club Fulham RLFC.
The original Cottage was built in 1780 by William Craven, the sixth Baron Craven, and was located close to where the Johnny Haynes Stand now stands. At the time, the surrounding area was woodland that formed part of Anne Boleyn's hunting grounds.
The Cottage was lived in by Edward Bulwer-Lytton (who wrote The Last Days of Pompeii) and other somewhat notable (and moneyed) persons until it was destroyed by fire in May 1888. Many rumours persist among Fulham fans about past tenants of Craven Cottage: Sir Arthur Conan Doyle, Jeremy Bentham, Florence Nightingale and even Queen Victoria are reputed to have stayed there, although there is no real evidence for this. Following the fire, the site was abandoned. Fulham had had eight previous grounds before settling at Craven Cottage for good; counting a temporary stay at Loftus Road, the Cottagers have had 12 grounds overall, meaning that only their former 'landlords' and rivals QPR have had more home grounds (14) in British football. Of particular note was Ranelagh House, Fulham's palatial home from 1886 to 1888.
When representatives of Fulham first came across the land in 1894, it was so overgrown that it took two years to make it suitable for football. A deal was struck whereby the owners of the ground would carry out the work in return for a proportion of the gate receipts.
The first football match with gate receipts took place on 10 October 1896, when Fulham played Minerva in the Middlesex Senior Cup. The ground's first stand was built shortly afterwards. Described as looking like an "orange box", it consisted of four wooden structures, each holding some 250 seats, and was later affectionately nicknamed the "rabbit hutch".
In 1904, London County Council became concerned with the level of safety at the ground and tried to have it closed. A court case followed in January 1905, as a result of which Archibald Leitch, a Scottish architect who had risen to prominence after building Ibrox Stadium a few years earlier, was hired to work on the stadium. In a scheme costing £15,000 (a record for the time), he built a pavilion (the present-day 'Cottage' itself) and the Stevenage Road Stand, in his characteristic red-brick style.
The stand on Stevenage Road celebrated its centenary in the 2005–06 season. Following the death of Fulham FC's favourite son, former England captain Johnny Haynes, in a car accident in October 2005, the Stevenage Road Stand was renamed the Johnny Haynes Stand after the club sought the opinions of Fulham supporters.
Both the Johnny Haynes Stand and the Cottage remain among the finest surviving examples of Archibald Leitch's football architecture, and both have been designated Grade II listed buildings.
An England v Wales match was played at the ground in 1907, followed by a rugby league international between England and Australia in 1911.
One of the club's directors, Henry Norris, and his friend William Hall took over Arsenal in the early 1910s, the plan being to merge Fulham with Arsenal to form a "London superclub" at Craven Cottage. The move was largely motivated by Fulham's failure thus far to gain promotion to the top division of English football. Norris also drew up plans for a larger stadium on the other side of Stevenage Road, but there was little need for it after the merger idea failed. During this era, the Cottage was used for choir singing, marching bands and other performances, as well as Mass.
In 1933 there were plans to demolish the ground and start again from scratch with a new 80,000-capacity stadium. These plans never materialised, mainly due to the Great Depression.
On 8 October 1938, 49,335 spectators watched Fulham play Millwall. It was the largest attendance ever at Craven Cottage, and the record is unlikely to be bettered, as the ground is now an all-seater stadium with room for no more than 25,700. The ground hosted several football games for the 1948 Summer Olympics, and is one of the last surviving venues to have done so.
It was not until after Fulham first reached the top division, in 1949, that further improvements were made to the stadium. In 1962, Fulham became the last side in the first division to erect floodlights, which were said to be the most expensive in Europe at the time. The lights resembled large pylons towering 50 metres over the ground, similar in appearance to those at the WACA. An electronic scoreboard was installed on the Riverside Terrace at the same time, and the flags of all the other first division teams were flown from flagpoles there. Following the sale of Alan Mullery to Tottenham Hotspur in 1964 (for £72,500), the Hammersmith End had a roof put over it at a cost of approximately £42,500.
Although Fulham were relegated, the development of Craven Cottage continued. The Riverside terracing, noted for the fact that fans occupying it would turn their heads annually to watch The Boat Race pass, was replaced by what was officially named the 'Eric Miller Stand', after a director of the club at the time. The stand, which cost £334,000 and held 4,200 seats, was opened with a friendly against Benfica in February 1972, whose side included Eusébio. Pelé also appeared at the ground, in a friendly played against his club Santos F.C. The Miller Stand brought the seated capacity up to 11,000 out of a total of 40,000. Eric Miller, who had been involved in questionable dealings aimed at moving Fulham away from the Cottage, committed suicide five years later amid a political and financial scandal. The stand is now better known as the Riverside Stand.
On Boxing Day 1963, Craven Cottage was the venue for the fastest hat-trick in the history of the English Football League, completed in less than three minutes by Graham Leggat as Fulham beat Ipswich 10–1 (a club record). The international record is held by Jimmy O'Connor, an Irish player who notched up his hat-trick in 2 minutes 14 seconds in 1967.
Between 1980 and 1984, the rugby league club Fulham RLFC played their home games at the Cottage. They have since evolved into the London Crusaders, the London Broncos and Harlequins Rugby League, before reverting to London Broncos ahead of the 2012 season. Craven Cottage hosted the team's largest-ever crowd at any ground, 15,013, for a game against Wakefield Trinity on 15 February 1981.
When the Hillsborough disaster occurred in 1989, Fulham were on the second-bottom rung of The Football League, but following the Taylor Report, Fulham's ambitious chairman Jimmy Hill tabled plans in 1996 for an all-seater stadium. These plans never came to fruition, partly due to local residents' pressure groups, and by the time Fulham reached the Premier League they still had standing areas in the ground, something virtually unheard of at the time. A year remained to address this (teams reaching the second tier for the first time are allowed a three-year period to reach the required standards for the top two divisions), but by the time the last league game was played there, against Leicester City on 27 April 2002, no building plans had been made. Two more Intertoto Cup games were played there later that year (against FC Haka of Finland and Egaleo FC of Greece), and the eventual solution was to decamp to Loftus Road, home of local rivals QPR. During this time, many Fulham fans attended only away games in protest at the move from Craven Cottage. 'Back to the Cottage', later to become the 'Fulham Supporters Trust', was set up as a fans' pressure group to persuade the chairman and his advisers that Craven Cottage was the only viable option for Fulham Football Club.
After one and a half seasons at Loftus Road, no work had been done on the Cottage. In December 2003, plans were unveiled for £8 million of major refurbishment work to bring it in line with Premier League requirements. With planning permission granted, work began in January 2004 in order to meet the deadline of the new season. The work proceeded as scheduled and the club were able to return home for the start of the 2004–05 season. Their first game in the new-look 22,000-capacity all-seater stadium was a pre-season friendly against Watford on 10 July 2004. Fenway Sports Group partnered with Fulham in 2009, citing the perceived heritage and quirks shared by the Cottage and Fenway Park and saying that no English club identifies with its stadium as much as Fulham.
The current stadium was one of the Premier League's smallest grounds at the time of Fulham's relegation at the end of the 2013–14 season (it was third-smallest, after the KC Stadium and the Liberty Stadium). Much admired for its fine architecture, the stadium has in recent years hosted a number of international matches, mostly involving Australia: the venue suits Australia because most of the country's top players are based in Europe, and West London has a significant community of expatriate Australians. Greece played South Korea there on 6 February 2007, and in 2011 the ground hosted an international friendly between Brazil and Ghana as well as the Women's Champions League Final.
Craven Cottage also hosts other events, such as five-a-side football tournaments and weddings, and on non-match days many visitors have Sunday lunch at the Riverside restaurant or the 'Cottage Cafe'. Craven Cottage hosted the Oxbridge Varsity Football match annually between 1991 and 2000 and again in 2003, 2006 (the same day as the famous Boat Race), 2008, 2009 and 2014, as well as a Soccer Aid warm-up match in 2006. The half-time entertainment often includes the SW6ers (previously called The Cravenettes), a group of female cheerleaders. Other events have included brass bands, an appearance by Michael Jackson (walking on the pitch rather than performing), a performance by Travis, Arabic dancing, keepie-uppie professionals and presentational awards. Most games also feature the 'Fulham flutter', a half-time draw, and a shoot-out competition of some kind, usually involving scoring through a 'hoop' or 'beat the goalie'. At the first home game of the season there is a carnival, where every Fulham fan is expected to turn up in black-and-white colours; there are usually live rock bands, player signings, clowns, stilt walkers, a steel (calypso) band, food stalls and a free training session for children in Bishops Park.
The Fulham Ladies (before their demise) and reserve teams occasionally played home matches at the Cottage; otherwise, they generally play at the club's training ground at Motspur Park. Craven Cottage is known by several affectionate nicknames among fans, including: The (River) Cottage, The Fortress (or Fortress Fulham), Thameside, The Friendly Confines, SW6, Lord of the Banks, The House of Hope, The Pavilion of Perfection, The 'True' Fulham Palace and The Palatial Home. The Thames at the banks of the Cottage is often referred to as 'Old Father' or The River of Dreams.
The most accessible route to the ground is to walk through Bishops Park from Putney Bridge (the nearest Underground station), a route known to Fulham fans as 'The Green Mile', as it is roughly a mile's walk through pleasant greenery. The Telegraph ranked the Cottage 9th out of the 54 grounds to have held Premier League football.
On 27 July 2012, Fulham FC were granted permission to redevelop the Riverside Stand, increasing the capacity of Craven Cottage to 30,000 seats. Beforehand, various rumours had arisen, including plans to return to ground-sharing with QPR in a new 40,000-seat White City stadium, although these were put firmly on hold with the construction of the Westfield shopping centre on the proposed site. The board appear to have moved away from their ambition to make Fulham the "Manchester United of the south" as it became clear how expensive such a plan would be. With large spaces of land at a premium in south-west London, Fulham appear committed to a gradual increase of the ground's capacity, often during the summer between seasons; the capacity of the Hammersmith End, for instance, was slightly increased in the summer of 2008. Fulham announced in 2007 that they planned to increase the capacity of Craven Cottage by 4,000 seats, but this has yet to be implemented. There were also proposals for a bridge to span the Thames, a redeveloped Riverside Stand and a museum.

More substantial plans arose in October 2011 with the 'Fulham Forever' campaign. With Mohamed Al-Fayed selling the Harrods department store for £1.5 billion in May 2010, a detailed plan emerged identifying the Riverside Stand as the only viable area for expansion. The scheme involved demolishing the back of the Riverside Stand and adding a new tier of seating on top of the current one, along with a row of corporate boxes, bringing Craven Cottage up to a 30,000 capacity. Taking local residents into account, the proposal would reopen the riverside walk, reduce light pollution by removing the floodlight masts, add new access points to make match-day crowds more manageable, and give the new stand a design respectful of its position on the River Thames. Buckingham Group Contracting were chosen in March 2013 as the construction company for the project. In May 2019, the club confirmed that work on the new Riverside Stand would commence in the summer of 2019. During the 2019–20, 2020–21 and 2021–22 seasons, the ground's capacity was temporarily reduced to 19,000. The club announced on 17 March 2022 that the lower tier of the Riverside Stand would be open for the 2022–23 season to over 2,000 supporters, with season tickets going on sale from 29 March.
The Hammersmith End is the northernmost stand in the ground, the closest to Hammersmith. Its roof was financed through the sale of Alan Mullery to Tottenham Hotspur F.C. It is traditionally the "home" end where the more vocal Fulham fans sit, and many stand during games in the back rows. If Fulham win the toss, they usually choose to play towards the Hammersmith End in the second half. The hardcore fans tend to sit (or rather stand) in the back half of the Hammersmith End, plus the entire Block H5 (known as 'H Block' to the faithful). The stand had terracing until the reopening of the ground in 2004, when it was replaced with seating in order to comply with league rules following the Taylor Report.
The Putney End is the southernmost stand in the ground, nearest to Putney and backing onto Bishops Park. This stand hosts home and away fans, separated by stewards, with away fans usually allocated blocks P5 and P6. Flags of every nationality in the Fulham squad were once hung from the roofing, although they were removed after the 2006–07 season commenced, and there is now an electronic scoreboard in place. There is a plane tree in the corner by the river.
The Riverside was originally terracing that backed onto the Thames, with large advertising hoardings above the fans. In 1971–72, an all-seater stand was built, originally known as the Riverside Stand (the name was confirmed in the Fulham v Carlisle United programme on 4 December 1971). Its hard lines and metallic and concrete finish are in stark contrast to the Johnny Haynes Stand opposite. The stand was opened for a friendly against S.L. Benfica, who included Eusébio in their team. In the Fulham v Burnley programme on 4 October 1977, it was revealed that the stand would be renamed the Eric Miller Stand, following the recent death of the former vice-chairman. It is sometimes incorrectly stated that the stand was renamed from the Eric Miller Stand back to the Riverside Stand upon the discovery of Miller's suicide (he had been under investigation for fraud and embezzlement); in fact, the name reverted to "Riverside Stand" in the 1990s.
The Riverside Stand backs onto the River Thames and is elevated above pitch level, unlike the other three stands. It contained corporate hospitality seating alongside Fulham fans. Jimmy Hill once described the Riverside as "a bit like the London Palladium", as Blocks V & W (the middle section) were often filled with the rich and famous (frequently including Al-Fayed). There were then several Harrods advertising hoardings, above which was the gantry for the press and cameras. Tickets in this area were often the easiest to buy; unsurprisingly, they were also among the more expensive. The Hammersmith End is to its left, the Putney End to its right, and opposite is the Johnny Haynes Stand.
During the 1970s, Craven Cottage flooded, with water flowing in from the riverside. The stand housed the George Cohen restaurant, while on non-match days there was the Cottage Cafe, located near to the Cottage itself. (The River Café is also located nearby.) Under Tommy Trinder's chairmanship in the 1960s, the flags of all the other Division 1 teams were proudly flown along the Thames. However, when Fulham were relegated in 1968, Trinder decided not to change the flags, as "Fulham won't be in this division next season". True to Trinder's prophecy, Fulham were relegated again. The roof of the stand has been used by sponsors, with VisitFlorida advertising in this way, and Pipex.com, FxPro, Lee Cooper Jeans and LG having previously done so.
After the 2019–20 season, the stand was demolished and rebuilt. Upon completion, the capacity of the ground will increase to around 29,600. On 26 November 2019, the Chairman Shahid Khan announced that the new development will be known as Fulham Pier, a destination venue outside of match-day use. Several issues have postponed completion from 2021 to 2024.
Originally called the Stevenage Road Stand, after the road it backs onto, the Johnny Haynes Stand is the oldest remaining football stand in the Football League and professional football. Originally constructed in 1905 and designed by Archibald Leitch, it is a Grade II listed building; the stand contains the ticket office and club shop and features original 'Bennet' wooden seating. Following his death in 2005, the stand was renamed after former player Johnny Haynes.
The exterior facing Stevenage Road has a brick façade and features the club's old emblem in the artwork. Decorative pillars show the club's foundation date as 1880, though this is thought to be incorrect. A special stone commemorating the supporters' fund-raising group Fulham 2000, and the Cottagers' return to Craven Cottage, was engraved on the façade. The family enclosures are located in the two corners of the stand, one nearest to the Hammersmith End and one nearest to the Putney End. The front of the stand now contains plastic seating but was originally a standing area. Children were often placed at the front of this enclosure, and up until the 1970s the area had a distinctive white picket fence to keep fans off the pitch.
The Cottage Pavilion dates back to 1905, along with the Johnny Haynes Stand, and was built by renowned football architect Archibald Leitch. Besides housing the changing rooms, the Cottage (also called The Clubhouse) is traditionally used by the players' families and friends, who sit on the balcony to watch the game. In the past, board meetings were also held in the Cottage itself. A large tapestry draped from the Cottage reads "Still Believe", encapsulating the now-famous moment when fans facing defeat against Hamburg SV in the Europa League semi-final roused the players with the chant of "Stand up if you still believe". In the three other corners of the ground there are what have been described as large 'filing cabinets': corporate boxes on three levels, although the box on the other side of the Putney End has been removed due to the redevelopment of the Riverside Stand.
Craven Cottage hosted the Northern Ireland versus Cyprus 1974 World Cup qualifier on 8 May 1973, a match moved from Belfast due to The Troubles. Northern Ireland won 3–0, with a Sammy Morgan goal and a Trevor Anderson brace concluding the scoring in the first half.
On 22 February 2000, it hosted an under-21 international friendly between England and Argentina. The hosts won 1–0 with Lee Hendrie's sixty-seventh-minute goal, with 15,747 in attendance.
In recent years, Craven Cottage has hosted several international friendly matches, including the Ireland national team's games against Colombia and Nigeria in May 2008 and May 2009 respectively, and against Oman in 2012. The South Korea national football team have also used the ground three times in recent years for international friendlies: first against Greece in February 2007, then against Serbia in November 2009, and then against Croatia in February 2013. On 17 November 2007, Australia beat Nigeria 1–0 in an international friendly at Craven Cottage. On 26 May 2011, Craven Cottage hosted the 2011 UEFA Women's Champions League Final between Lyon and Potsdam. In September 2011, a friendly between Ghana and Brazil was also held at Craven Cottage. On 15 October 2013, Australia beat Canada 3–0 at Craven Cottage. On 28 May 2014, Scotland played out a 2–2 draw with a Nigerian team that had qualified for the 2014 World Cup Finals.
On 27 March 2018, Australia played host to Colombia in an international friendly; the match ended 0–0, with both teams having qualified for the 2018 World Cup Finals in Russia.
Coordinates: 51°28′30″N 0°13′18″W
Constantine

Constantine most often refers to:

- Constantine the Great, Roman emperor from 306 to 337, also known as Constantine I
- Constantine, Algeria, a city in Algeria

Constantine may also refer to:
Lists of composers

This is a list of lists of composers grouped by various criteria.
Cedar Falls, Iowa

Cedar Falls is a city in Black Hawk County, Iowa, United States. As of the 2020 census, the city population was 40,713. Cedar Falls is home to the University of Northern Iowa, a public university.
Cedar Falls, along with the neighboring city of Waterloo, is one of the two principal municipalities within the Waterloo-Cedar Falls Metropolitan Statistical Area. This area is known locally as the "Cedar Valley" because of the Cedar River that traverses the vicinity.
Cedar Falls was first settled in March 1845 by brothers-in-law William R. Sturgis and Erasmus D. Adams. Initially named Sturgis Falls, the city took its present name when it merged with Cedar City, another settlement on the opposite side of the Cedar River. The city's founders are honored each year with a week-long community-wide celebration named in their honor, the Sturgis Falls Celebration.
Because of the availability of water power, Cedar Falls developed as a milling and industrial center prior to the Civil War. The establishment of the Civil War Soldiers' Orphans Home in Cedar Falls changed the direction in which the city developed when, following the war, it became the first building on the campus of the Iowa State Normal School (now the University of Northern Iowa).
Cedar Falls is located at 42°31′24″N 92°26′45″W (42.523520, −92.446402). According to the United States Census Bureau, the city has a total area of 29.61 square miles (76.69 km²), of which 28.75 square miles (74.46 km²) is land and 0.86 square miles (2.23 km²) is water.
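The degrees-minutes-seconds coordinates and the decimal coordinates above encode the same point; the DMS form rounds to 42.52333, −92.44583, while the parenthetical figures are a more precise source value. A minimal Python sketch of the conversion (the function name and structure are ours, for illustration only):

```python
def dms_to_decimal(degrees: int, minutes: int, seconds: float, west_or_south: bool = False) -> float:
    """Convert a degrees-minutes-seconds coordinate to decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    # West longitudes and south latitudes are negative by convention.
    return -value if west_or_south else value

# 42°31′24″N 92°26′45″W -> approximately (42.52333, -92.44583)
lat = dms_to_decimal(42, 31, 24)
lon = dms_to_decimal(92, 26, 45, west_or_south=True)
print(round(lat, 5), round(lon, 5))
```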
Natural forest, prairie and wetland areas are found within the city limits at the Hartman Reserve Nature Center.
Cedar Falls is part of the Waterloo-Cedar Falls metropolitan area.
As of the census of 2020, there were 40,713 people and 15,254 households. The population density was 1,401.8 inhabitants per square mile (541.2/km²). The racial makeup of the city was 91.2% White, 1.3% African American, 0.3% Native American, 4.5% Asian, and 2.5% from two or more races. Hispanic or Latino of any race were 2.7% of the population.
As of the census of 2010, there were 39,260 people, 14,608 households, and 8,091 families living in the city. The population density was 1,365.6 inhabitants per square mile (527.3/km²). There were 15,477 housing units at an average density of 538.3 per square mile (207.8/km²). The racial makeup of the city was 93.4% White, 2.1% African American, 0.2% Native American, 2.3% Asian, 0.5% from other races, and 1.7% from two or more races. Hispanic or Latino of any race were 2.0% of the population.
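The density figures in these census paragraphs are simple ratios of population (or housing units) to land area. A quick check in Python using the 2010 figures above, a sketch for illustration rather than census methodology:

```python
# 2010 census figures quoted above; land area from the Geography section.
population = 39_260
housing_units = 15_477
land_area_sq_mi = 28.75

print(f"{population / land_area_sq_mi:,.1f} people per square mile")    # ~1,365.6
print(f"{housing_units / land_area_sq_mi:,.1f} units per square mile")  # ~538.3
```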
There were 14,608 households, of which 24.8% had children under the age of 18 living with them, 45.5% were married couples living together, 7.2% had a female householder with no husband present, 2.7% had a male householder with no wife present, and 44.6% were non-families. 28.0% of all households were made up of individuals, and 10.4% had someone living alone who was 65 years of age or older. The average household size was 2.37 and the average family size was 2.88.
The median age in the city was 26.8 years. 17.3% of residents were under the age of 18; 29.7% were between the ages of 18 and 24; 20.5% were from 25 to 44; 20.1% were from 45 to 64; and 12.4% were 65 years of age or older. The gender makeup of the city was 48.1% male and 51.9% female.
As of the census of 2000, there were 36,145 people, 12,833 households, and 7,558 families living in the city. The population density was 1,277.2 people per square mile (493.1/km²). There were 13,271 housing units at an average density of 468.9 per square mile (181.0/km²). The racial makeup of the city was 95.14% White, 1.57% Black or African American, 0.15% Native American, 1.61% Asian, 0.02% Pacific Islander, 0.41% from other races, and 1.09% from two or more races. 1.08% of the population were Hispanic or Latino of any race.
There were 12,833 households, out of which 26.9% had children under the age of 18 living with them, 48.9% were married couples living together, 7.5% had a female householder with no husband present, and 41.1% were non-families. 25.5% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.45 and the average family size was 2.91.
The population was spread out, with 18.0% under the age of 18, 30.6% from 18 to 24, 20.5% from 25 to 44, 19.0% from 45 to 64, and 11.9% who were 65 years of age or older. The median age was 26 years. For every 100 females, there were 88.5 males. For every 100 females age 18 and over, there were 85.7 males.
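The "males per 100 females" phrasing used here converts directly into population shares. A small illustrative sketch, our own arithmetic rather than anything from the census:

```python
# "For every 100 females, there were 88.5 males" (2000 census, above).
males_per_100_females = 88.5

# Implied male share of the population: males / (males + females).
male_share = males_per_100_females / (males_per_100_females + 100)
print(f"{male_share:.1%} male, {1 - male_share:.1%} female")  # ~46.9% / ~53.1%
```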
The median income for a household in the city was $70,226, and the median income for a family was $85,158. Males had a median income of $60,235 versus $50,312 for females. The per capita income for the city was $27,140. About 5.6% of families and 4.7% of the population were below the poverty line, including 8.5% of those under age 18, and 6.1% of those age 65 or over.
In 1986, the City of Cedar Falls established the Cedar Falls Art and Culture Board, which oversees the operation of the city's Cultural Division and the James & Meryl Hearst Center for the Arts.
The Cedar Falls Public Library is housed in the Adele Whitenach Davis building located at 524 Main Street. The 47,000-square-foot (4,400 m²) structure, designed by Struxture Architects, replaced the Carnegie-Dayton building in early 2004. As of the 2016 fiscal year, the library's holdings included approximately 8,000 audio materials, 12,000 video materials, and 104,000 books and periodicals, for a grand total of approximately 124,000 items. Patrons made 245,000 visits, taking advantage of circulation services and adult, teen, and youth programming. Circulation of library materials for fiscal year 2016 was 543,134. The library provides public access to more than 30 computers offering internet access, office software suites, high-resolution color printing, wi-fi, and various games. The library also offers digital lending through Libby, Hoopla, and other platforms.
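The holdings total quoted above is just the sum of the three collection counts; a one-liner confirms the approximate figure (the collection labels are ours, for illustration):

```python
holdings = {"audio": 8_000, "video": 12_000, "books_and_periodicals": 104_000}
print(f"total: {sum(holdings.values()):,} items")  # total: 124,000 items
```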
The mission of the Cedar Falls Public Library is to promote literacy and provide open access to resources which facilitate lifelong learning. The library is a member of the Cedar Valley Library Consortium. Cedar Falls Public Library shares an Integrated Library System (SirsiDynix Symphony) with the Waterloo Public Library. Library management is provided by Kelly Stern, Director of the Cedar Falls Public Library.
The Cedar Falls Historical Society has its offices in the Victorian Home and Carriage House Museum. It preserves Cedar Falls' history through its five museums, collection, archives, and public programs. Besides the Victorian House, the Society operates the Cedar Falls Ice House, Little Red Schoolhouse, and Behrens-Rapp Station.
The city's major shopping mall is College Square Mall, built in 1969.
The Oster Regent Theatre in downtown Cedar Falls originally opened in 1910 as the Cotton Theatre. It is currently the home of the Cedar Falls Community Theatre which was founded in 1978. The company produces approximately seven to eight shows per season.
The Gallagher-Bluedorn Performing Arts Center on the University of Northern Iowa campus hosts many professionally touring Broadway plays and musicals throughout the year. The facility's Great Hall can seat 1,680 patrons.
The city hosts one of Iowa's three public universities, the University of Northern Iowa (UNI).
Cedar Falls Community Schools, which covers most of the city limits, includes Cedar Falls High School, two junior high schools, and seven elementary schools. Waterloo Community School District covers a small section of Cedar Falls. There is a private Christian school, Valley Lutheran High School. Additionally, there is a private Catholic elementary school at St. Patrick Catholic Church, under the Roman Catholic Archdiocese of Dubuque; a significant renovation began in May 2014.
The Malcolm Price Lab School/Northern University High School was a state-funded K–12 school run by the university. It closed in 2012 following cuts at UNI.
The city owns its own power, gas, water, and cable TV services. Because of this, Cedar Falls Utilities has been able to provide gigabit internet speeds to residents, a service that became available on January 14, 2015. Cedar Falls can do so because Iowa, unlike 19 other states, does not prohibit municipal broadband from competing with the private cable TV monopoly. In 2020, Cedar Falls Utilities was recognized by PC Magazine as having the nation's fastest internet, by a factor of three.
Cedar Falls has public transportation provided by the Metropolitan Transit Authority of Black Hawk County.
The underground music scene in the Cedar Falls area from 1977 to the present day is well documented. The Wartburg College Art Gallery in Waverly, Iowa hosted a collaborative history of the bands, record labels, and music venues involved in the Cedar Falls music scene, which ran from March 17 to April 14, 2007. This effort has been continued as a wiki-style website called The Secret History of the Cedar Valley.
Cedar Falls' sister cities are:
{
"paragraph_id": 0,
"text": "Cedar Falls is a city in Black Hawk County, Iowa, United States. As of the 2020 census, the city population was 40,713. Cedar Falls is home to the University of Northern Iowa, a public university.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cedar Falls along with neighboring city Waterloo, Iowa are the two principal municipalities within the Waterloo-Cedar Falls Metropolitan Statistical Area. This area is known locally as the \"Cedar Valley,\" due to the Cedar River that traverses the vicinity.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cedar Falls was first settled in March 1845 by brothers-in-law William R. Sturgis and Erasmus D. Adams. Initially, the city was named Sturgis Falls. The city was called Sturgis Falls until it was merged with Cedar City (another city on the other side of the Cedar River), creating Cedar Falls. The city's founders are honored each year with a week long community-wide celebration named in their honor – the Sturgis Falls Celebration.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Because of the availability of water power, Cedar Falls developed as a milling and industrial center prior to the Civil War. The establishment of the Civil War Soldiers' Orphans Home in Cedar Falls changed the direction in which the city developed when, following the war, it became the first building on the campus of the Iowa State Normal School (now the University of Northern Iowa).",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Cedar Falls is located at 42°31′24″N 92°26′45″W / 42.52333°N 92.44583°W / 42.52333; -92.44583 (42.523520, −92.446402). According to the United States Census Bureau, the city has a total area of 29.61 square miles (76.69 km), of which 28.75 square miles (74.46 km) is land and 0.86 square miles (2.23 km) is water.",
"title": "Geography"
},
{
"paragraph_id": 5,
"text": "Natural forest, prairie and wetland areas are found within the city limits at the Hartman Reserve Nature Center.",
"title": "Geography"
},
{
"paragraph_id": 6,
"text": "Cedar Falls is part of the Waterloo-Cedar Falls metropolitan area.",
"title": "Demographics"
},
{
"paragraph_id": 7,
"text": "As of the census of 2020, there were 40,713 people, and 15,254 households. The population density was 1,401.8 inhabitants per square mile (541.2 inhabitants/km). The racial makeup of the city was 91.2% White, 1.3% African American, 0.3% Native American, 4.5% Asian, and 2.5% from two or more races. Hispanic or Latino of any race were 2.7% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 8,
"text": "As of the census of 2010, there were 39,260 people, 14,608 households, and 8,091 families living in the city. The population density was 1,365.6 inhabitants per square mile (527.3/km). There were 15,477 housing units at an average density of 538.3 per square mile (207.8/km). The racial makeup of the city was 93.4% White, 2.1% African American, 0.2% Native American, 2.3% Asian, 0.5% from other races, and 1.7% from two or more races. Hispanic or Latino of any race were 2.0% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 9,
"text": "There were 14,608 households, of which 24.8% had children under the age of 18 living with them, 45.5% were married couples living together, 7.2% had a female householder with no husband present, 2.7% had a male householder with no wife present, and 44.6% were non-families. 28.0% of all households were made up of individuals, and 10.4% had someone living alone who was 65 years of age or older. The average household size was 2.37 and the average family size was 2.88.",
"title": "Demographics"
},
{
"paragraph_id": 10,
"text": "The median age in the city was 26.8 years. 17.3% of residents were under the age of 18; 29.7% were between the ages of 18 and 24; 20.5% were from 25 to 44; 20.1% were from 45 to 64; and 12.4% were 65 years of age or older. The gender makeup of the city was 48.1% male and 51.9% female.",
"title": "Demographics"
},
{
"paragraph_id": 11,
"text": "As of the census of 2000, there were 36,145 people, 12,833 households, and 7,558 families living in the city. The population density was 1,277.2 people per square mile (493.1 people/km). There were 13,271 housing units at an average density of 468.9 per square mile (181.0/km). The racial makeup of the city was 95.14% White, 1.57% Black or African American, 0.15% Native American, 1.61% Asian, 0.02% Pacific Islander, 0.41% from other races, and 1.09% from two or more races. 1.08% of the population were Hispanic or Latino of any race.",
"title": "Demographics"
},
{
"paragraph_id": 12,
"text": "There were 12,833 households, out of which 26.9% had children under the age of 18 living with them, 48.9% were married couples living together, 7.5% had a female householder with no husband present, and 41.1% were non-families. 25.5% of all households were made up of individuals, and 9.4% had someone living alone who was 65 years of age or older. The average household size was 2.45 and the average family size was 2.91.",
"title": "Demographics"
},
{
"paragraph_id": 13,
"text": "Age spread: 18.0% under the age of 18, 30.6% from 18 to 24, 20.5% from 25 to 44, 19.0% from 45 to 64, and 11.9% who were 65 years of age or older. The median age was 26 years. For every 100 females, there were 88.5 males. For every 100 females age 18 and over, there were 85.7 males.",
"title": "Demographics"
},
{
"paragraph_id": 14,
"text": "The median income for a household in the city was $70,226, and the median income for a family was $85,158. Males had a median income of $60,235 versus $50,312 for females. The per capita income for the city was $27,140. About 5.6% of families and 4.7% of the population were below the poverty line, including 8.5% of those under age 18, and 6.1% of those age 65 or over.",
"title": "Demographics"
},
{
"paragraph_id": 15,
"text": "In 1986, the City of Cedar Falls established the Cedar Falls Art and Culture Board, which oversees the operation of the city's Cultural Division and the James & Meryl Hearst Center for the Arts.",
"title": "Arts and culture"
},
{
"paragraph_id": 16,
"text": "The Cedar Falls Public Library is housed in the Adele Whitenach Davis building located at 524 Main Street. The 47,000 square foot (4,400 m) structure, designed by Struxture Architects, replaced the Carniege-Dayton building in early 2004. As of the 2016 fiscal year, the library's holdings included approximately 8,000 audio materials, 12,000 video materials, and 104,000 books and periodicals for a grand total of approximately 124,000 items. Patrons made 245,000 visits which took advantage of circulation services, adult, teen, and youth programming. Circulation of library materials for fiscal year 2016 was 543,134. The library also provides public access to more than 30 public computers which provide internet access, office software suites, high resolution color printing, wi-fi, and various games. The library also offers digital loaning through Libby, Hoopla, and other platforms.",
"title": "Arts and culture"
},
{
"paragraph_id": 17,
"text": "The mission of the Cedar Falls Public Library is to promote literacy and provide open access to resources which facilitate lifelong learning. The library is a member of the Cedar Valley Library Consortium. Cedar Falls Public Library shares an Integrated Library System (SirsiDynix Symphony) with the Waterloo Public Library. Library management is provided by Kelly Stern, Director of the Cedar Falls Public Library.",
"title": "Arts and culture"
},
{
"paragraph_id": 18,
"text": "The Cedar Falls Historical Society has its offices in the Victorian Home and Carriage House Museum. It preserves Cedar Falls' history through its five museums, collection, archives, and public programs. Besides the Victorian House, the Society operates the Cedar Falls Ice House, Little Red Schoolhouse, and Behrens-Rapp Station.",
"title": "Arts and culture"
},
{
"paragraph_id": 19,
"text": "The city's major shopping mall is College Square Mall, built in 1969.",
"title": "Arts and culture"
},
{
"paragraph_id": 20,
"text": "The Oster Regent Theatre in downtown Cedar Falls originally opened in 1910 as the Cotton Theatre. It is currently the home of the Cedar Falls Community Theatre which was founded in 1978. The company produces approximately seven to eight shows per season.",
"title": "Arts and culture"
},
{
"paragraph_id": 21,
"text": "The Gallagher-Bluedorn Performing Arts Center on the University of Northern Iowa campus hosts many professionally touring Broadway plays and musicals throughout the year. The facility's Great Hall can seat 1,680 patrons.",
"title": "Arts and culture"
},
{
"paragraph_id": 22,
"text": "It hosts one of three public universities in Iowa, University of Northern Iowa (UNI).",
"title": "Education"
},
{
"paragraph_id": 23,
"text": "Cedar Falls Community Schools, which covers most of the city limits, includes Cedar Falls High School, two junior high schools, seven elementary schools. Waterloo Community School District covers a small section of Cedar Falls. There is a private Christian school, Valley Lutheran High School. Additionally there is a private Catholic elementary school at St. Patrick Catholic Church, under the Roman Catholic Archdiocese of Dubuque. A significant renovation occurred beginning in May 2014.",
"title": "Education"
},
{
"paragraph_id": 24,
"text": "The Malcolm Price Lab School/Northern University High School, was a state-funded K–12 school run by the university. It closed in 2012 following cuts at UNI.",
"title": "Education"
},
{
"paragraph_id": 25,
"text": "The city owns its power, gas and water, and cable TV service. Because of this, Cedar Falls Utilities provides gigabit speeds to residents, this became available on January 14, 2015. Cedar Falls has the power to do so because, unlike 19 other states, Iowa does not prohibit municipal broadband from competing with the private cable TV monopoly. In 2020, Cedar Falls Utilities was recognized by PC Magazine as having the nation's fastest internet, by a factor of three.",
"title": "Utilities and internet access"
},
{
"paragraph_id": 26,
"text": "Cedar Falls has public transportation provided by the Metropolitan Transit Authority of Black Hawk County.",
"title": "Transportation"
},
{
"paragraph_id": 27,
"text": "The underground music scene in the Cedar Falls area from 1977 to present-day is well documented. The Wartburg College Art Gallery in Waverly, Iowa hosted a collaborative history of the bands, record labels, and music venues involved in the Cedar Falls music scene which ran from March 17 to April 14, 2007. This effort has been continued as a wiki-style website called The Secret History of the Cedar Valley.",
"title": "Media"
},
{
"paragraph_id": 28,
"text": "Cedar Falls' sister cities are:",
"title": "Sister cities"
}
] | Cedar Falls is a city in Black Hawk County, Iowa, United States. As of the 2020 census, the city population was 40,713. Cedar Falls is home to the University of Northern Iowa, a public university. Cedar Falls along with neighboring city Waterloo, Iowa are the two principal municipalities within the Waterloo-Cedar Falls Metropolitan Statistical Area. This area is known locally as the "Cedar Valley," due to the Cedar River that traverses the vicinity. | 2002-02-25T15:51:15Z | 2023-10-29T15:01:43Z | [
"Template:Webarchive",
"Template:Dead link",
"Template:Cite journal",
"Template:Black Hawk County, Iowa",
"Template:Use American English",
"Template:Convert",
"Template:Infobox settlement",
"Template:Coord",
"Template:Cbignore",
"Template:ISBN",
"Template:Distinguish",
"Template:Citation needed",
"Template:Portal",
"Template:Cite web",
"Template:US Census population",
"Template:See also",
"Template:Div col end",
"Template:Cite book",
"Template:Cite news",
"Template:Wikivoyage",
"Template:Authority control",
"Template:Div col",
"Template:Flagicon",
"Template:Commons category",
"Template:Use mdy dates",
"Template:Alumni",
"Template:Reflist",
"Template:EB1911 poster"
] | https://en.wikipedia.org/wiki/Cedar_Falls,_Iowa |
6,652 | Cleveland Guardians | The Cleveland Guardians are an American professional baseball team based in Cleveland. The Guardians compete in Major League Baseball (MLB) as a member club of the American League (AL) Central division. Since 1994, the team has played its home games at Progressive Field. Since their establishment as a Major League franchise in 1901, the team has won 11 Central Division titles, six American League pennants, and two World Series championships (in 1920 and 1948). The team's World Series championship drought since 1948 is the longest active among all 30 current Major League teams. The team's name references the Guardians of Traffic, eight monolithic 1932 Art Deco sculptures by Henry Hering on the city's Hope Memorial Bridge, which is adjacent to Progressive Field. The team's mascot is named "Slider". The team's spring training facility is at Goodyear Ballpark in Goodyear, Arizona.
The franchise originated in 1896 as the Columbus Buckeyes, a minor league team based in Columbus, Ohio, that played in the Western League. The team was renamed the Columbus Senators the following year and then relocated to Grand Rapids, Michigan, in the middle of the 1899 season, becoming the Grand Rapids Furniture Makers for the remainder of the season. The team relocated to Cleveland in 1900 and was called the Cleveland Lake Shores. The Western League itself was renamed the American League prior to the 1900 season while continuing its minor league status. When the American League declared itself a major league in 1901, Cleveland was one of its eight charter franchises. Originally called the Cleveland Bluebirds or Blues, the team was also unofficially called the Cleveland Bronchos in 1902. Beginning in 1903, the team was named the Cleveland Napoleons or Naps, after team captain Nap Lajoie.
Following Lajoie's departure after the 1914 season, club owner Charles Somers requested that baseball writers choose a new name. They chose the name Cleveland Indians, said to be a tribute to the nickname that fans gave to the Cleveland Spiders while Louis Sockalexis, a Native American, was playing for the team. That name stuck and remained in use for more than a century. Common nicknames for the Indians were "the Tribe" and "the Wahoos", the latter referencing their longtime logo, Chief Wahoo. After the Indians name came under criticism as part of the Native American mascot controversy, the team adopted the Guardians name following the 2021 season.
From August 24 to September 14, 2017, the team won 22 consecutive games, the longest winning streak in American League history, and the second longest winning streak in Major League Baseball history.
As of the end of the 2023 season, the franchise's overall record is 9,760–9,300 (.512).
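(A minimal worked check of the percentage quoted above, using only the win-loss totals in that sentence: baseball winning percentage is wins divided by total decisions.)

$$\text{Win\%} = \frac{W}{W+L} = \frac{9{,}760}{9{,}760 + 9{,}300} = \frac{9{,}760}{19{,}060} \approx .512$$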
"In 1857, baseball games were a daily spectacle in Cleveland's Public Squares. City authorities tried to find an ordinance forbidding it, to the joy of the crowd, they were unsuccessful. – Harold Seymour"
From 1865 to 1868, the Forest Citys were an amateur ball club. During the 1869 season, Cleveland was among several cities that established professional baseball teams following the success of the 1869 Cincinnati Red Stockings, the first fully professional team. In the newspapers before and after 1870, the team was often called the Forest Citys, in the same generic way that the team from Chicago was sometimes called the Chicagos.
In 1871 the Forest Citys joined the new National Association of Professional Base Ball Players (NA), the first professional league. Ultimately, two of the league's western clubs went out of business during the first season, and the Great Chicago Fire left that city's White Stockings impoverished, unable to field a team again until 1874. Cleveland was thus the NA's westernmost outpost in 1872, the year the club folded. Cleveland played its full schedule to July 19, followed by two games versus Boston in mid-August, and disbanded at the end of the season.
In 1876, the National League (NL) supplanted the NA as the major professional league. Cleveland was not among its charter members, but by 1879 the league was looking for new entries and the city gained an NL team. The Cleveland Forest Citys were recreated, but rebranded in 1882 as the Cleveland Blues, because the National League required distinct colors for that season. The Blues had mediocre records for six seasons and were ruined by a trade war with the Union Association (UA) in 1884, when its three best players (Fred Dunlap, Jack Glasscock, and Jim McCormick) jumped to the UA after being offered higher salaries. The Cleveland Blues merged with the St. Louis Maroons UA team in 1885.
Cleveland went without major league baseball for two seasons until gaining a team in the American Association (AA) in 1887. After the AA's Allegheny club jumped to the NL, Cleveland followed suit in 1889, as the AA began to crumble. The Cleveland ball club, named the Spiders (supposedly inspired by their "skinny and spindly" players) slowly became a power in the league. In 1891, the Spiders moved into League Park, which would serve as the home of Cleveland professional baseball for the next 55 years. Led by native Ohioan Cy Young, the Spiders became a contender in the mid-1890s, playing in the Temple Cup Series (that era's World Series) twice and winning it in 1895. The team began to fade after this success, and was dealt a severe blow under the ownership of the Robison brothers.
Prior to the 1899 season, Frank Robison, the Spiders' owner, bought the St. Louis Browns, thus owning two clubs at the same time. The Browns were renamed the "Perfectos", and restocked with Cleveland talent. Just weeks before the season opener, most of the better Spiders were transferred to St. Louis, including three future Hall of Famers: Cy Young, Jesse Burkett and Bobby Wallace. The roster maneuvers failed to create a powerhouse Perfectos team, as St. Louis finished fifth in both 1899 and 1900. The Spiders were left with essentially a minor league lineup, and began to lose games at a record pace. Drawing almost no fans at home, they ended up playing most of their season on the road, and became known as "The Wanderers". The team ended the season in 12th place, 84 games out of first place, with an all-time worst record of 20–134 (.130 winning percentage). Following the 1899 season, the National League disbanded four teams, including the Spiders franchise. The disastrous 1899 season would actually be a step toward a new future for Cleveland fans the next year.
The Cleveland Infants competed in the Players' League, which operated only for the 1890 season; the league was well attended in some cities, but club owners lacked the confidence to continue beyond the one season. The Cleveland Infants finished with 55 wins and 75 losses, playing their home games at Brotherhood Park.
The Columbus Buckeyes were founded in Ohio in 1896 and were part of the Western League. In 1897 the team changed its name to the Columbus Senators. In the middle of the 1899 season, the Senators made a swap with the Grand Rapids Furniture Makers of the Interstate League: the Columbus Senators became the Grand Rapids Furniture Makers and played in the Western League, while the Grand Rapids Furniture Makers became the Columbus Senators and played in the Interstate League. Often confused with the Grand Rapids Rustlers (also known as the Rippers), the Grand Rapids Furniture Makers finished the 1899 season in the Western League and became the franchise that relocated to Cleveland the following season. In 1900 the team moved to Cleveland and was named the Cleveland Lake Shores. Around the same time, Ban Johnson changed the name of his minor league (the Western League) to the American League. In 1900 the American League was still considered a minor league. In 1901 the team was called the Cleveland Bluebirds or Blues when the American League broke with the National Agreement and declared itself a competing major league. The Cleveland franchise was among its eight charter members, and is one of four teams that remain in its original city, along with Boston, Chicago, and Detroit.
The new team was owned by coal magnate Charles Somers and tailor Jack Kilfoyl. Somers, a wealthy industrialist and also co-owner of the Boston Americans, lent money to other team owners, including Connie Mack's Philadelphia Athletics, to keep them and the new league afloat. Players did not think the name "Bluebirds" was suitable for a baseball team. Writers frequently shortened it to Cleveland Blues due to the players' all-blue uniforms, but the players did not like this unofficial name either. The players themselves tried to change the name to Cleveland Bronchos in 1902, but this name never caught on.
Cleveland suffered from financial problems in its first two seasons. This led Somers to seriously consider moving to either Pittsburgh or Cincinnati. Relief came in 1902 as a result of the conflict between the National and American Leagues. In 1901, Napoleon "Nap" Lajoie, the Philadelphia Phillies' star second baseman, jumped to the A's after his contract was capped at $2,400 per year, making him one of the highest-profile players to jump to the upstart AL. The Phillies subsequently filed an injunction to force Lajoie's return, which was granted by the Pennsylvania Supreme Court. The injunction appeared to doom any hopes of an early settlement between the warring leagues. However, a lawyer discovered that the injunction was only enforceable in the state of Pennsylvania. Mack, partly to thank Somers for his past financial support, agreed to trade Lajoie to the then-moribund Blues, who offered a $25,000 salary over three years. Due to the injunction, however, Lajoie had to sit out any games played against the A's in Philadelphia. Lajoie arrived in Cleveland on June 4 and was an immediate hit, drawing 10,000 fans to League Park. Soon afterward, he was named team captain, and in 1903 the team was called the Cleveland Napoleons or Naps after a newspaper conducted a write-in contest.
Lajoie was named manager in 1905, and the team's fortunes improved somewhat. They finished half a game short of the pennant in 1908. However, the success did not last, and Lajoie resigned as manager during the 1909 season but remained on as a player.
After that, the team began to unravel, leading Kilfoyl to sell his share of the team to Somers. Cy Young, who returned to Cleveland in 1909, was ineffective for most of his three remaining years and Addie Joss died from tubercular meningitis prior to the 1911 season.
Despite a strong lineup anchored by the potent Lajoie and Shoeless Joe Jackson, poor pitching kept the team below third place for most of the next decade. One reporter referred to the team as the Napkins, "because they fold up so easily". The team hit bottom in 1914 and 1915, finishing in last place both years.
The 1915 season brought significant changes to the team. Lajoie, nearly 40 years old, was no longer a top hitter in the league, batting only .258 in 1914. With Lajoie engaged in a feud with manager Joe Birmingham, the team sold him back to the A's.
With Lajoie gone, the club needed a new name. Somers asked the local baseball writers to come up with a new name, and based on their input, the team was renamed the Cleveland Indians. The name referred to the nickname "Indians" that was applied to the Cleveland Spiders baseball club during the time when Louis Sockalexis, a Native American, played in Cleveland (1897–1899).
At the same time, Somers' business ventures began to fail, leaving him deeply in debt. With the Indians playing poorly, attendance and revenue suffered. Somers decided to trade Jackson midway through the 1915 season for two players and $31,500, one of the largest sums paid for a player at the time.
By 1916, Somers was at the end of his tether and sold the team to a syndicate headed by Chicago railroad contractor James C. "Jack" Dunn. Manager Lee Fohl, who had taken over in early 1915, acquired two minor league pitchers, Stan Coveleski and Jim Bagby, and traded for center fielder Tris Speaker, who was engaged in a salary dispute with the Red Sox. All three would ultimately become key players in bringing a championship to Cleveland.
Speaker took over the reins as player-manager in 1919 and led the team to a championship in 1920. On August 16, 1920, the Indians were playing the Yankees at the Polo Grounds in New York. Shortstop Ray Chapman, who often crowded the plate, was batting against Carl Mays, who had an unusual underhand delivery. It was late in the afternoon, and the infield was completely shaded, with the center field area (the batters' background) bathed in sunlight. In addition, at the time, "part of every pitcher's job was to dirty up a new ball the moment it was thrown onto the field. By turns, they smeared it with dirt, licorice, tobacco juice; it was deliberately scuffed, sandpapered, scarred, cut, even spiked. The result was a misshapen, earth-colored ball that traveled through the air erratically, tended to soften in the later innings, and as it came over the plate, was very hard to see."
In any case, Chapman did not move reflexively when Mays' pitch came his way. The pitch hit Chapman in the head, fracturing his skull. Chapman died the next day, becoming the only player to sustain a fatal injury from a pitched ball. The Indians, who at the time were locked in a tight three-way pennant race with the Yankees and White Sox, were not slowed down by the death of their teammate. Rookie Joe Sewell hit .329 after replacing Chapman in the lineup.
In September 1920, the Black Sox Scandal came to a boil. With just a few games left in the season, and Cleveland and Chicago neck and neck for first place at 94–54 and 95–56 respectively, the Chicago owner suspended eight players. The White Sox lost two of three in their final series, while Cleveland won four and lost two in their final two series. Cleveland finished two games ahead of Chicago and three games ahead of the Yankees to win its first pennant, led by Speaker's .388 hitting, Jim Bagby's 30 victories, and solid performances from Steve O'Neill and Stan Coveleski. Cleveland went on to defeat the Brooklyn Robins 5–2 in the World Series for their first title, winning four games in a row after the Robins took a 2–1 Series lead. The Series included three memorable "firsts", all of them in Game 5 at Cleveland, and all by the home team. In the first inning, right fielder Elmer Smith hit the first Series grand slam. In the fourth inning, Jim Bagby hit the first Series home run by a pitcher. In the top of the fifth inning, second baseman Bill Wambsganss executed the first (and, so far, only) unassisted triple play in World Series history; in fact, it remains the only Series triple play of any kind.
The team would not reach the heights of 1920 again for 28 years. Speaker and Coveleski were aging and the Yankees were rising with a new weapon: Babe Ruth and the home run. They managed two second-place finishes but spent much of the decade in last place. In 1927 Dunn's widow, Mrs. George Pross (Dunn had died in 1922), sold the team to a syndicate headed by Alva Bradley.
The Indians were a middling team by the 1930s, finishing third or fourth most years. 1936 brought Cleveland a new superstar in 17-year-old pitcher Bob Feller, who came from Iowa with a dominating fastball. That season, Feller set a record with 17 strikeouts in a single game and went on to lead the league in strikeouts from 1938 to 1941.
On August 20, 1938, Indians catchers Hank Helf and Frank Pytlak set the "all-time altitude mark" by catching baseballs dropped from the 708-foot (216 m) Terminal Tower.
By 1940, Feller, along with Ken Keltner, Mel Harder and Lou Boudreau, led the Indians to within one game of the pennant. However, the team was wracked with dissension, with some players (including Feller and Mel Harder) going so far as to request that Bradley fire manager Ossie Vitt. Reporters lampooned them as the Cleveland Crybabies. Feller, who had pitched a no-hitter to open the season and won 27 games, lost the final game of the season to unknown pitcher Floyd Giebell of the Detroit Tigers. The Tigers won the pennant and Giebell never won another major league game.
Cleveland entered 1941 with a young team and a new manager, Roger Peckinpaugh, who had replaced the despised Vitt; but the team regressed, finishing in fourth. Cleveland would soon be depleted of two stars. Hal Trosky retired in 1941 due to migraine headaches, and Bob Feller enlisted in the Navy two days after the attack on Pearl Harbor. Starting third baseman Ken Keltner and outfielder Ray Mack were both drafted in 1945, taking two more starters out of the lineup.
In 1946, Bill Veeck formed an investment group that purchased the Cleveland Indians from Bradley's group for a reported $1.6 million. Among the investors were Bob Hope, who had grown up in Cleveland, and former Tigers slugger Hank Greenberg. A former owner of a minor league franchise in Milwaukee, Veeck brought to Cleveland a gift for promotion. At one point, Veeck hired rubber-faced Max Patkin, the "Clown Prince of Baseball", as a coach. Patkin's appearance in the coaching box was the sort of promotional stunt that delighted fans but infuriated the American League front office.
Recognizing that he had acquired a solid team, Veeck soon abandoned the aging, small and lightless League Park to take up full-time residence in massive Cleveland Municipal Stadium. The Indians had briefly moved from League Park to Municipal Stadium in mid-1932, but moved back to League Park due to complaints about the cavernous environment. From 1937 onward, however, the Indians began playing an increasing number of games at Municipal, until by 1940 they played most of their home slate there. League Park was mostly demolished in 1951, but has since been rebuilt as a recreational park.
Making the most of the cavernous stadium, Veeck had a portable center field fence installed, which he could move in or out depending on how the distance favored the Indians against their opponents in a given series. The fence moved as much as 15 feet (5 m) between series opponents. Following the 1947 season, the American League countered with a rule change that fixed the distance of an outfield wall for the duration of a season. The massive stadium did, however, permit the Indians to set the then-record for the largest crowd to see a Major League baseball game. On October 10, 1948, Game 5 of the World Series against the Boston Braves drew over 84,000. The record stood until the Los Angeles Dodgers drew a crowd in excess of 92,500 to watch Game 5 of the 1959 World Series at the Los Angeles Memorial Coliseum against the Chicago White Sox.
Under Veeck's leadership, one of Cleveland's most significant achievements was breaking the color barrier in the American League by signing Larry Doby, formerly a player for the Negro leagues' Newark Eagles, in 1947, 11 weeks after Jackie Robinson signed with the Dodgers. Like Robinson, Doby battled racism on and off the field, but he posted a .301 batting average in 1948, his first full season. A power-hitting center fielder, Doby twice led the American League in home runs.
In 1948, needing pitching for the stretch run of the pennant race, Veeck turned to the Negro leagues again and signed pitching great Satchel Paige amid much controversy. Paige had been barred from Major League Baseball during his prime, and Veeck's signing of the aging star in 1948 was viewed by many as another publicity stunt. At an official age of 42, Paige became the oldest rookie in Major League Baseball history, and the first black pitcher in the American League. Paige ended the year with a 6–1 record, a 2.48 ERA, 45 strikeouts, and two shutouts.
In 1948, veterans Boudreau, Keltner, and Joe Gordon had career offensive seasons, while newcomers Doby and Gene Bearden also had standout seasons. The team went down to the wire with the Boston Red Sox, winning a one-game playoff, the first in American League history, to go to the World Series. In the series, the Indians defeated the Boston Braves four games to two for their first championship in 28 years. Boudreau won the American League MVP Award.
The Indians appeared in a film the following year titled The Kid From Cleveland, in which Veeck had an interest. The film portrayed the team helping out a "troubled teenaged fan" and featured many members of the Indians organization. However, filming during the season cost the players valuable rest days, leading to fatigue toward the end of the season. That season, Cleveland again contended before falling to third place. On September 23, 1949, Bill Veeck and the Indians buried their 1948 pennant in center field the day after they were mathematically eliminated from the pennant race.
Later in 1949, Veeck's first wife (who had a half-stake in Veeck's share of the team) divorced him. With most of his money tied up in the Indians, Veeck was forced to sell the team to a syndicate headed by insurance magnate Ellis Ryan.
In 1953, Al Rosen was an All-Star for the second year in a row, was named The Sporting News Major League Player of the Year, and won the American League Most Valuable Player Award in a unanimous vote after leading the AL in runs, home runs, RBIs (for the second year in a row), and slugging percentage, and finishing second by one point in batting average. Ryan was forced out in 1953 in favor of Myron Wilson, who in turn gave way to William Daley in 1956. Despite this turnover in ownership, a powerhouse team composed of Feller, Doby, Minnie Miñoso, Luke Easter, Bobby Ávila, Al Rosen, Early Wynn, Bob Lemon, and Mike Garcia continued to contend through the early 1950s. However, Cleveland won only a single pennant in the decade, in 1954, finishing second to the New York Yankees five times.
The winningest season in franchise history came in 1954, when the Indians finished with a record of 111–43 (.721) over the 154-game schedule. That mark set an American League record for wins that stood for 44 years, until the Yankees won 114 games in 1998 over a 162-game regular season. The Indians' 1954 winning percentage of .721 is still an American League record. The Indians returned to the World Series to face the New York Giants, but could not bring home the title, ultimately being upset by the Giants in a sweep. The series was notable for Willie Mays' over-the-shoulder catch off the bat of Vic Wertz in Game 1. Cleveland remained a talented team throughout the remainder of the decade, finishing in second place in 1959, George Strickland's last full year in the majors.
From 1960 to 1993, the Indians managed one third-place finish (in 1968) and six fourth-place finishes (in 1960, 1974, 1975, 1976, 1990, and 1992) but spent the rest of the time at or near the bottom of the standings, including four seasons with over 100 losses (1971, 1985, 1987, 1991).
The Indians hired general manager Frank Lane, known as "Trader" Lane, away from the St. Louis Cardinals in 1957. Lane over the years had gained a reputation as a GM who loved to make deals. With the White Sox, Lane had made over 100 trades involving over 400 players in seven years. In a short stint in St. Louis, he traded away Red Schoendienst and Harvey Haddix. Lane summed up his philosophy when he said that the only deals he regretted were the ones that he did not make.
One of Lane's early trades in Cleveland was to send Roger Maris to the Kansas City Athletics in the middle of 1958. Indians executive Hank Greenberg was not happy about the trade and neither was Maris, who said that he could not stand Lane. After Maris broke Babe Ruth's home run record, Lane defended himself by saying he still would have done the deal because Maris was unknown and he received good ballplayers in exchange.
After the Maris trade, Lane acquired 25-year-old Norm Cash from the White Sox for Minnie Miñoso and then traded him to Detroit before he ever played a game for the Indians; Cash went on to hit over 350 home runs for the Tigers. The Indians received Steve Demeter in the deal, who had only five at-bats for Cleveland.
In 1960, Lane made the trade that would define his tenure in Cleveland, dealing slugging right fielder and fan favorite Rocky Colavito to the Detroit Tigers for Harvey Kuenn just before Opening Day.
It was a blockbuster trade that swapped the 1959 AL home run co-champion (Colavito) for the AL batting champion (Kuenn). After the trade, however, Colavito hit over 30 home runs four times and made three All-Star teams for Detroit and Kansas City before returning to Cleveland in 1965. Kuenn, on the other hand, played only one season for the Indians before departing for San Francisco in a trade for an aging Johnny Antonelli and Willie Kirkland. Akron Beacon Journal columnist Terry Pluto documented the decades of woe that followed the trade in his book The Curse of Rocky Colavito. Despite being attached to the curse, Colavito said that he never placed a curse on the Indians but that the trade was prompted by a salary dispute with Lane.
Lane also engineered a unique trade of managers in mid-season 1960, sending Joe Gordon to the Tigers in exchange for Jimmy Dykes. Lane left the team in 1961, but ill-advised trades continued. In 1965, the Indians traded pitcher Tommy John, who would go on to win 288 games in his career, and 1966 Rookie of the Year Tommie Agee to the White Sox to get Colavito back.
However, Indians pitchers set numerous strikeout records. They led the league in strikeouts every year from 1963 to 1968, and narrowly missed in 1969. The 1964 staff was the first to amass 1,100 strikeouts, and the 1968 staff was the first to collect more strikeouts than hits allowed.
The 1970s were not much better, with the Indians trading away several future stars, including Graig Nettles, Dennis Eckersley, Buddy Bell and 1971 Rookie of the Year Chris Chambliss, for a number of players who made no impact.
Constant ownership changes did not help the Indians. In 1963, Daley's syndicate sold the team to a group headed by general manager Gabe Paul. Three years later, Paul sold the Indians to Vernon Stouffer, of the Stouffer's frozen-food empire. Prior to Stouffer's purchase, the team had been rumored to be relocating due to poor attendance. Despite the potential for a financially strong owner, Stouffer suffered some financial setbacks unrelated to baseball, and consequently the team was cash-poor. To solve some of his financial problems, Stouffer made an agreement to play a minimum of 30 home games in New Orleans, with a view to a possible move there. After rejecting an offer from George Steinbrenner and former Indian Al Rosen, Stouffer sold the team in 1972 to a group led by Cleveland Cavaliers and Cleveland Barons owner Nick Mileti. Steinbrenner went on to buy the New York Yankees in 1973.
Only five years later, Mileti's group sold the team for $11 million to a syndicate headed by trucking magnate Steve O'Neill and including former general manager and owner Gabe Paul. O'Neill's death in 1983 led to the team going on the market once more. O'Neill's nephew Patrick O'Neill did not find a buyer until real estate magnates Richard and David Jacobs purchased the team in 1986.
The team remained near the bottom of the standings, with losing seasons every year from 1969 to 1975. One highlight was the acquisition of Gaylord Perry in 1972. The Indians traded fireballer "Sudden Sam" McDowell for Perry, who became the first Indians pitcher to win the Cy Young Award. In 1975, Cleveland broke another color barrier with the hiring of Frank Robinson as Major League Baseball's first African American manager. Robinson served as player-manager and provided a franchise highlight when he hit a pinch-hit home run on Opening Day. But the high-profile signing of Wayne Garland, a 20-game winner in Baltimore, proved to be a disaster after Garland suffered from shoulder problems and went 28–48 over five years. The team failed to improve with Robinson as manager, and he was fired in 1977. That year, pitcher Dennis Eckersley threw a no-hitter against the California Angels. The next season, he was traded to the Boston Red Sox, where he won 20 games in 1978 and another 17 in 1979.
The 1970s also featured the infamous Ten Cent Beer Night at Cleveland Municipal Stadium. The ill-conceived promotion at a 1974 game against the Texas Rangers ended in a riot by fans and a forfeit by the Indians.
There were bright spots in the 1980s. In May 1981, Len Barker threw a perfect game against the Toronto Blue Jays, joining Addie Joss as the only Indians pitchers to do so. In 1980, "Super Joe" Charboneau won the American League Rookie of the Year award. Unfortunately, Charboneau was out of baseball by 1983 after falling victim to back injuries, and Barker, who was also hampered by injuries, never became a consistently dominant starting pitcher.
Eventually, the Indians traded Barker to the Atlanta Braves for Brett Butler and Brook Jacoby, who became mainstays of the team for the remainder of the decade. Butler and Jacoby were joined by Joe Carter, Mel Hall, Julio Franco and Cory Snyder, bringing new hope to fans in the late 1980s.
Cleveland's struggles over the 30-year span were highlighted in the 1989 film Major League, which comically depicted a hapless Cleveland ball club going from worst to first by the end of the film.
Throughout the 1980s, the Indians' owners had pushed for a new stadium. Cleveland Stadium had been a symbol of the Indians' glory years in the 1940s and 1950s. However, during the lean years even crowds of 40,000 were swallowed up by the cavernous environment. The old stadium was not aging gracefully; chunks of concrete were falling off in sections and the old wooden pilings were petrifying. In 1984, a proposal for a $150 million domed stadium was defeated in a referendum 2–1.
Finally, in May 1990, Cuyahoga County voters passed an excise tax on sales of alcohol and cigarettes in the county. The tax proceeds were to be used for financing the construction of the Gateway Sports and Entertainment Complex, which would include Jacobs Field for the Indians and Gund Arena for the Cleveland Cavaliers basketball team.
The team's fortunes started to turn in 1989, ironically with a very unpopular trade. The team sent power-hitting outfielder Joe Carter to the San Diego Padres for two unproven players, Sandy Alomar Jr. and Carlos Baerga. Alomar made an immediate impact, not only being elected to the All-Star team but also winning Cleveland's fourth Rookie of the Year award and a Gold Glove. Baerga became a three-time All-Star with consistent offensive production.
Indians general manager John Hart made a number of moves that finally brought success to the team. In 1991, he hired former Indian Mike Hargrove to manage and traded catcher Eddie Taubensee to the Houston Astros who, with a surplus of outfielders, were willing to part with Kenny Lofton. Lofton finished second in AL Rookie of the Year balloting with a .285 average and 66 stolen bases.
The Indians were named "Organization of the Year" by Baseball America in 1992, in response to the appearance of offensive bright spots and an improving farm system.
The team suffered a tragedy during spring training of 1993, when a boat carrying pitchers Steve Olin, Tim Crews, and Bob Ojeda crashed into a pier. Olin and Crews were killed, and Ojeda was seriously injured; he missed most of the season and retired the following year.
By the end of the 1993 season, the team was in transition, leaving Cleveland Stadium and fielding a talented nucleus of young players. Many of those players came from the Indians' new AAA farm team, the Charlotte Knights, who won the International League title that year.
Indians General Manager John Hart and team owner Richard Jacobs managed to turn the team's fortunes around. The Indians opened Jacobs Field in 1994 with the aim of improving on the prior season's sixth-place finish. The Indians were only one game behind the division-leading Chicago White Sox on August 12 when a players strike wiped out the rest of the season.
Having contended for the division in the aborted 1994 season, Cleveland sprinted to a 100–44 record (the season was shortened by 18 games due to player/owner negotiations) in 1995, winning its first-ever divisional title. Veterans Dennis Martínez, Orel Hershiser and Eddie Murray combined with a young core of players including Omar Vizquel, Albert Belle, Jim Thome, Manny Ramírez, Kenny Lofton and Charles Nagy to lead the league in team batting average as well as team ERA.
After defeating the Boston Red Sox in the Division Series and the Seattle Mariners in the ALCS, Cleveland clinched the American League pennant and a World Series berth for the first time since 1954. The World Series ended in disappointment, however: the Indians fell in six games to the Atlanta Braves.
Tickets for every Indians home game sold out several months before opening day in 1996. The Indians repeated as AL Central champions but lost to the wild card Baltimore Orioles in the Division Series.
In 1997, Cleveland started slowly but finished with an 86–75 record. Taking their third consecutive AL Central title, the Indians defeated the New York Yankees in the Division Series, 3–2. After defeating the Baltimore Orioles in the ALCS, Cleveland went on to face the Florida Marlins in a World Series that featured the coldest game in Series history. With the series tied after Game 6, the Indians went into the ninth inning of Game 7 with a 2–1 lead, but closer José Mesa allowed the Marlins to tie the game. In the eleventh inning, Édgar Rentería drove in the winning run, giving the Marlins their first championship. Cleveland became the first team to lose the World Series after carrying a lead into the ninth inning of the seventh game.
In 1998, the Indians made the postseason for the fourth straight year. After defeating the wild-card Boston Red Sox 3–1 in the Division Series, Cleveland lost the 1998 ALCS in six games to the New York Yankees, who had come into the postseason with a then-AL record 114 wins in the regular season.
For the 1999 season, Cleveland added relief pitcher Ricardo Rincón and second baseman Roberto Alomar, brother of catcher Sandy Alomar Jr., and won the Central Division title for the fifth consecutive year. The team scored 1,009 runs, becoming the first (and to date only) team since the 1950 Boston Red Sox to score more than 1,000 runs in a season. This time, Cleveland did not make it past the first round, losing the Division Series to the Red Sox despite taking a 2–0 lead in the series. In Game 3, Indians starter Dave Burba went down with an injury in the 4th inning. Four pitchers, including presumed Game 4 starter Jaret Wright, surrendered nine runs in relief. Without a long reliever or emergency starter on the playoff roster, Hargrove started both Bartolo Colón and Charles Nagy in Games 4 and 5 on only three days' rest. The Indians lost Game 4 23–7 and Game 5 12–8. Four days later, Hargrove was dismissed as manager.
In 2000, the Indians had a 44–42 start but caught fire after the All-Star break, going 46–30 the rest of the way to finish 90–72. The team had one of the league's best offenses that year and a defense that yielded three Gold Gloves. However, they ended up five games behind the Chicago White Sox in the Central Division and missed the wild card by one game to the Seattle Mariners. Mid-season trades brought Bob Wickman and Jake Westbrook to Cleveland. After the season, free-agent outfielder Manny Ramírez departed for the Boston Red Sox.
In 2000, Larry Dolan bought the Indians for $320 million from Richard Jacobs, who, along with his late brother David, had paid $45 million for the club in 1986. The sale set a record at the time for the sale of a baseball franchise.
The 2001 season saw a return to the postseason. After the departures of Ramírez and Sandy Alomar Jr., the Indians signed Ellis Burks and former MVP Juan González, who helped the team win the Central Division with a 91–71 record. One of the highlights came on August 5, when the Indians completed the biggest comeback in MLB history, erasing a 14–2 deficit in the seventh inning to defeat the Seattle Mariners 15–14 in 11 innings. The Mariners, who won an MLB record-tying 116 games that season, had a strong bullpen, and Indians manager Charlie Manuel had already pulled many of his starters with the game seemingly out of reach.
Seattle and Cleveland met in the first round of the postseason; however, the Mariners won the series 3–2. In the 2001–02 offseason, GM John Hart resigned and his assistant, Mark Shapiro, took the reins.
Shapiro moved to rebuild by dealing aging veterans for younger talent. He traded Roberto Alomar to the New York Mets for a package that included outfielder Matt Lawton and prospects Alex Escobar and Billy Traber. When the team fell out of contention in mid-2002, Shapiro fired manager Charlie Manuel and traded pitching ace Bartolo Colón for prospects Brandon Phillips, Cliff Lee, and Grady Sizemore; acquired Travis Hafner from the Rangers for Ryan Drese and Einar Díaz; and picked up Coco Crisp from the St. Louis Cardinals for aging starter Chuck Finley. Jim Thome left after the season, going to the Phillies for a larger contract.
Young Indians teams finished far out of contention in 2002 and 2003 under new manager Eric Wedge. They posted strong offensive numbers in 2004, but continued to struggle with a bullpen that blew more than 20 saves. A highlight of the season was a 22–0 victory over the New York Yankees on August 31, one of the worst defeats suffered by the Yankees in team history.
The offense got off to a poor start in early 2005. After a brief July slump, the Indians caught fire in August and cut a 15.5-game deficit in the Central Division down to 1.5 games. However, the Indians lost six of their last seven games, five of them by one run, and missed the playoffs by only two games. Shapiro was named Executive of the Year in 2005. The next season, the club made several roster changes while retaining its nucleus of young players. The off-season was highlighted by the acquisition of top prospect Andy Marte from the Boston Red Sox. The Indians had a solid offensive season, led by career years from Travis Hafner and Grady Sizemore. Hafner, despite missing the last month of the season, tied the single-season grand slam record of six, set in 1987 by Don Mattingly. Despite the solid offensive performance, the bullpen struggled with 23 blown saves (a Major League worst), and the Indians finished a disappointing fourth.
In 2007, Shapiro signed veteran help for the bullpen and outfield in the offseason. Veterans Aaron Fultz and Joe Borowski joined Rafael Betancourt in the Indians bullpen. The Indians improved significantly over the prior year and went into the All-Star break in second place. The team brought back Kenny Lofton for his third stint with the team in late July. The Indians finished with a 96–66 record, tied with the Red Sox for the best in baseball, earning their seventh Central Division title in 13 years and their first postseason trip since 2001.
The Indians began their playoff run by defeating the Yankees in the ALDS three games to one. The series is most remembered for the swarm of insects that overtook the field in the later innings of Game 2. The Indians then jumped out to a three-games-to-one lead over the Red Sox in the ALCS, but the season ended in disappointment when Boston swept the final three games to advance to the 2007 World Series.
Despite the loss, Cleveland players took home a number of awards. Grady Sizemore, who had a .995 fielding percentage and only two errors in 405 chances, won the Gold Glove Award, Cleveland's first since 2001. Indians pitcher CC Sabathia won the second Cy Young Award in team history with a 19–7 record, a 3.21 ERA, and an MLB-leading 241 innings pitched. Eric Wedge was awarded the first Manager of the Year Award in team history. Shapiro was named Executive of the Year for the second time in 2007.
The Indians struggled during the 2008 season. Injuries to sluggers Travis Hafner and Victor Martinez, as well as to starting pitchers Jake Westbrook and Fausto Carmona, led to a poor start. The Indians, falling to last place for a short time in June and July, traded CC Sabathia to the Milwaukee Brewers for prospects Matt LaPorta, Rob Bryson, and Michael Brantley, and traded starting third baseman Casey Blake for catching prospect Carlos Santana. Pitcher Cliff Lee went 22–3 with an ERA of 2.54 and earned the AL Cy Young Award. Grady Sizemore had a career year, winning a Gold Glove Award and a Silver Slugger Award, and the Indians finished with a record of 81–81.
Prospects for the 2009 season dimmed early when the Indians ended May with a record of 22–30. Shapiro made multiple trades: Cliff Lee and Ben Francisco to the Philadelphia Phillies for prospects Jason Knapp, Carlos Carrasco, Jason Donald, and Lou Marson; Victor Martinez to the Boston Red Sox for prospects Bryan Price, Nick Hagadone, and Justin Masterson; Ryan Garko to the Texas Rangers for Scott Barnes; and Kelly Shoppach to the Tampa Bay Rays for Mitch Talbot. The Indians finished the season tied for fourth in their division, with a record of 65–97. The team announced on September 30, 2009, that Eric Wedge and all of the team's coaching staff would be released at the end of the 2009 season. Manny Acta was hired as the team's 40th manager on October 25, 2009.
On February 18, 2010, it was announced that Shapiro (following the end of the 2010 season) would be promoted to team President, with current President Paul Dolan becoming the new Chairman/CEO, and longtime Shapiro assistant Chris Antonetti filling the GM role.
On January 18, 2011, longtime popular former first baseman and manager Mike Hargrove was brought in as a special adviser. The Indians started the 2011 season strong, going 30–15 in their first 45 games and sitting seven games ahead of the Detroit Tigers for first place. Injuries then led to a slump in which the Indians fell out of first place. Many minor leaguers, such as Jason Kipnis and Lonnie Chisenhall, got opportunities to fill in for the injured. The biggest news of the season came on July 30, when the Indians traded four prospects for Colorado Rockies star pitcher Ubaldo Jiménez. The Indians sent their top two pitchers in the minors, Alex White and Drew Pomeranz, along with Joe Gardner and Matt McBride. On August 25, the Indians signed Jim Thome, the franchise's all-time home run leader, off waivers. He made his first appearance in an Indians uniform since leaving Cleveland after the 2002 season. To honor Thome, the Indians placed him at his original position, third base, for one pitch against the Minnesota Twins on September 25. It was his first appearance at third base since 1996, and his last for Cleveland. The Indians finished the season in second place, 15 games behind the division champion Tigers.
The Indians broke Progressive Field's Opening Day attendance record with 43,190 against the Toronto Blue Jays on April 5, 2012. The game went 16 innings, setting the MLB Opening Day record, and lasted 5 hours and 14 minutes.
On September 27, 2012, with six games left in the Indians' 2012 season, Manny Acta was fired; Sandy Alomar Jr. was named interim manager for the remainder of the season. On October 6, the Indians announced that Terry Francona, who managed the Boston Red Sox to five playoff appearances and two World Series between 2004 and 2011, would take over as manager for 2013.
The Indians entered the 2013 season following an active offseason of dramatic roster turnover. Key acquisitions included free agent 1B/OF Nick Swisher and CF Michael Bourn. The team added prized right-handed pitching prospect Trevor Bauer, OF Drew Stubbs, and relief pitchers Bryan Shaw and Matt Albers in a three-way trade with the Arizona Diamondbacks and Cincinnati Reds that sent RF Shin-Soo Choo to the Reds and Tony Sipp to the Diamondbacks. Other notable additions included utility man Mike Avilés, catcher Yan Gomes, designated hitter Jason Giambi, and starting pitcher Scott Kazmir. The 2013 Indians increased their win total by 24 over 2012 (from 68 to 92), finishing in second place, one game behind Detroit in the Central Division, but securing the number one seed in the American League Wild Card standings. In their first postseason appearance since 2007, Cleveland lost the 2013 American League Wild Card Game 4–0 at home to Tampa Bay. Francona was recognized for the turnaround with the 2013 American League Manager of the Year Award.
With an 85–77 record, the 2014 Indians had consecutive winning seasons for the first time since 1999–2001, but they were eliminated from playoff contention during the last week of the season and finished third in the AL Central.
In 2015, after struggling through the first half of the season, the Indians finished 81–80 for their third consecutive winning season, which the team had not done since 1999–2001. For the second straight year, the Tribe finished third in the Central and was eliminated from the Wild Card race during the last week of the season. Following the departure of longtime team executive Mark Shapiro on October 6, the Indians promoted GM Chris Antonetti to President of Baseball Operations, assistant general manager Mike Chernoff to GM, and named Derek Falvey as assistant GM. Falvey was later hired by the Minnesota Twins in 2016, becoming their President of Baseball Operations.
The Indians set what was then a franchise record for the longest winning streak when they won their 14th consecutive game, a 2–1 win over the Toronto Blue Jays in 19 innings on July 1, 2016, at Rogers Centre. The team clinched the Central Division title on September 26, their eighth division title overall and first since 2007, and returned to the playoffs for the first time since 2013. They finished the regular season at 94–67, marking their fourth straight winning season, a feat not accomplished since the 1990s and early 2000s.
The Indians began the 2016 postseason by sweeping the Boston Red Sox in the best-of-five American League Division Series, then defeated the Blue Jays in five games in the 2016 American League Championship Series to claim their sixth American League pennant and advance to the World Series against the Chicago Cubs. It marked the first appearance for the Indians in the World Series since 1997 and first for the Cubs since 1945. The Indians took a 3–1 series lead following a victory in Game 4 at Wrigley Field, but the Cubs rallied to take the final three games and won the series 4 games to 3. The Indians' 2016 success led to Francona winning his second AL Manager of the Year Award with the club.
From August 24 through September 14 of the 2017 season, the Indians set a new American League record by winning 22 games in a row. On September 28, the Indians won their 100th game of the season, marking only the third time in franchise history the team had reached that milestone. They finished the regular season with 102 wins, the second-most in team history (behind the 1954 team's 111). The Indians earned the AL Central title for the second consecutive year, along with home-field advantage throughout the American League playoffs, but they lost the 2017 ALDS to the Yankees 3–2 after being up 2–0.
In 2018, the Indians won their third consecutive AL Central crown with a 91–71 record, but were swept in the 2018 American League Division Series by the Houston Astros, who outscored Cleveland 21–6. In 2019, despite a two-game improvement, the Indians missed the playoffs, finishing three games behind the Tampa Bay Rays for the second AL Wild Card berth. During the 2020 season (shortened to 60 games because of the COVID-19 pandemic), the Indians went 35–25, finishing second behind the Minnesota Twins in the AL Central, but qualified for the expanded playoffs. In the best-of-three AL Wild Card Series, the Indians were swept by the New York Yankees, ending their season.
On December 18, 2020, the team announced that the Indians name and logo would be dropped after the 2021 season, later revealing the replacement to be the Guardians. In their first season as the Guardians, the team won the 2022 AL Central Division crown, the 11th division title in franchise history. In the best-of-three AL Wild Card Series, the Guardians defeated the Tampa Bay Rays 2–0 to advance to the AL Division Series, which they lost to the New York Yankees 3–2, ending their season. In June 2022, sports investor David Blitzer bought a 25% stake in the franchise with an option to acquire controlling interest in 2028.
Following Francona's retirement at the end of the 2023 season, the Guardians named Stephen Vogt as their new manager on November 6, 2023.
The rivalry with fellow Ohio team the Cincinnati Reds is known as the Battle of Ohio or Buckeye Series and features the Ohio Cup trophy for the winner. Prior to 1997, the winner of the cup was determined by an annual pre-season game, played at minor-league Cooper Stadium in the state capital of Columbus just days before the start of each new Major League Baseball season, with the cup awarded in postgame ceremonies. A total of eight Ohio Cup games were played, with the Guardians winning six, before the series ended with the start of interleague play in 1997. The Ohio Cup was a favorite among baseball fans in Columbus, with attendances regularly topping 15,000.
Since 1997, the two teams have played each other as part of the regular season, with the exception of 2002. The Ohio Cup was reintroduced in 2008 and is now presented to the team that wins the most games in the series that season. Initially, the teams played one three-game series per season, meeting in Cleveland in 1997 and in Cincinnati the following year. Since 1999, they have played two series per season, one at each ballpark (again excepting 2002). A format change in 2013 shortened each series to two games, except in years when the AL and NL Central divisions meet in interleague play, when each series is usually extended to three games. Through the 2020 meetings, the Guardians lead the series 66–51.
An on-and-off rivalry with the Pittsburgh Pirates stems from the close proximity of the two cities and carries over elements of the longstanding National Football League rivalry between the Cleveland Browns and Pittsburgh Steelers. Because the Guardians' designated interleague rival is the Reds and the Pirates' designated rival is the Tigers, the teams have met only periodically: one three-game series each year from 1997 to 2001, then intermittently between 2002 and 2022, generally in years when the AL Central played the NL Central in the former interleague rotation. The teams played six games in 2020, when MLB instituted an abbreviated schedule focused on regional matchups. Beginning in 2023, the teams play a three-game series each season as a result of the new "balanced" schedule. The Pirates lead the series 21–18.
Because the Guardians play a large share of their games each year against their AL Central competitors (19 against each team until 2023), several rivalries have developed within the division.
The Guardians have a geographic rivalry with the Detroit Tigers, highlighted in past years by intense battles for the AL Central title. The matchup has some carryover elements from the Ohio State-Michigan rivalry, as well as the general historic rivalry between Michigan and Ohio dating back to the Toledo War.
The Chicago White Sox are another rival, dating back to the 1959 season, when the Sox slipped past the Guardians to win the AL pennant. The rivalry intensified when both clubs were placed in the new AL Central in 1994. During that season, the two teams battled for the division title, with the Guardians one game back of Chicago when the strike began in August. During a game in Chicago that year, the White Sox confiscated Albert Belle's corked bat, and Guardians pitcher Jason Grimsley crawled through the Comiskey Park clubhouse ceiling in an attempt to retrieve it. Belle later signed with the White Sox in 1997, adding further intensity to the rivalry.
The official team colors are navy blue, red, and white.
The primary home uniform is white with navy blue piping around each sleeve and is worn with navy blue undershirts, belts, and socks. Across the front of the jersey, in script font, is the word "Guardians" in red with a navy blue outline.
The alternate home jersey is red with a navy blue script "Guardians" trimmed in white on the front, and navy blue piping on both sleeves, with navy blue undershirts, belts, and socks.
The home cap is navy blue with a red bill and features a red "diamond C" on the front.
The primary road uniform is gray, with "Cleveland" in navy blue "diamond C" letters, trimmed in red across the front of the jersey, navy blue piping around the sleeves, and navy blue undershirts, belts, and socks.
The alternate road jersey is navy blue with "Cleveland" in red "diamond C" letters trimmed in white on the front of the jersey, and navy blue undershirts, belts, and socks.
The road cap is identical to the home cap except that the bill is navy blue.
For all games, the team uses a navy blue batting helmet with a red "diamond C" on the front.
All jerseys, home and away, feature the "winged G" logo on one sleeve and a patch from Marathon Petroleum, part of a sponsorship deal running through the 2026 season, on the other. Which sleeve carries the Marathon logo depends on how the player bats: left-handed hitters wear it on the right sleeve, the arm facing the main TV camera when they bat, and right-handed hitters wear it on the left.
The club name and its cartoon logo have been criticized for perpetuating Native American stereotypes. In 1997 and 1998, protesters were arrested after effigies were burned. Charges were dismissed in the 1997 case, and were not filed in the 1998 case. Protesters arrested in the 1998 incident subsequently fought and lost a lawsuit alleging that their First Amendment rights had been violated.
Bud Selig, then Commissioner of Baseball, said in 2014 that he had never received a complaint about the logo. He acknowledged being aware of protests against such mascots, but said that individual teams, such as the Indians and the Atlanta Braves (whose name had been criticized for similar reasons), should make their own decisions. An organized group of Native Americans, which had protested for many years, demonstrated against Chief Wahoo on Opening Day 2015, noting that the year marked the 100th anniversary of the team taking the Indians name. Owner Paul Dolan, while stating his respect for the critics, said he mainly heard from fans who wanted to keep Chief Wahoo and that he had no plans to change it.
On January 29, 2018, Major League Baseball announced that Chief Wahoo would be removed from the Indians' uniforms as of the 2019 season, stating that the logo was no longer appropriate for on-field use. The block "C" was promoted to the primary logo; at the time, there were no plans to change the team's name.
In 2020, protests over the murder of George Floyd, a black man, by a Minneapolis police officer, led Dolan to reconsider use of the Indians name. On July 3, 2020, on the heels of the Washington Redskins announcing that they would "undergo a thorough review" of that team's name, the Indians announced that they would "determine the best path forward" regarding the team's name and emphasized the need to "keep improving as an organization on issues of social justice".
On December 13, 2020, it was reported that the Indians name would be dropped after the 2021 season. Although the team had hinted that it might move forward without a replacement name (in a manner similar to the Washington Football Team), it was announced via Twitter on July 23, 2021, that the team would be named the Guardians, after the Guardians of Traffic, eight large Art Deco statues on the Hope Memorial Bridge, located close to Progressive Field.
The club, however, found itself amid a trademark dispute with a men's roller derby team called the Cleveland Guardians. The Cleveland Guardians roller derby team has competed in the Men's Roller Derby Association since 2016. In addition, two other entities have attempted to preempt the team's use of the trademark by filing their own registrations with the U.S. Patent and Trademark Office. The roller derby team filed a federal lawsuit in the U.S. District Court for the Northern District of Ohio on October 27, 2021, seeking to block the baseball team's name change. On November 16, 2021, the lawsuit was resolved, and both teams were allowed to continue using the Guardians name. The name change from Indians to Guardians became official on November 19, 2021.
Cleveland stations WTAM (1100 AM/106.9 FM) and WMMS (100.7 FM) serve as flagship stations for the Cleveland Guardians Radio Network, with lead announcer Tom Hamilton and Jim Rosenhaus calling the games.
The television rights are held by Bally Sports Great Lakes. Lead announcer Matt Underwood, analyst and former Indians Gold Glove-winning centerfielder Rick Manning, and field reporter Andre Knott form the broadcast team, with Al Pawlowski and former Indians pitcher Jensen Lewis serving as pregame/postgame hosts. Former Indians Pat Tabler, Ellis Burks and Chris Gimenez serve as contributors and occasional fill-ins for Manning and/or Lewis. Select games are simulcast over-the-air on WKYC channel 3.
Notable former broadcasters include Tom Manning, Jack Graney (the first ex-baseball player to become a play-by-play announcer), Ken Coleman, Joe Castiglione, Van Patrick, Nev Chandler, Bruce Drennan, Jim "Mudcat" Grant, Rocky Colavito, Dan Coughlin, and Jim Donovan.
Previous broadcasters who have had lengthy tenures with the team include Joe Tait (15 seasons between TV and radio), Jack Corrigan (18 seasons on TV), Ford C. Frick Award winner Jimmy Dudley (19 seasons on radio), Mike Hegan (23 seasons between TV and radio), and Herb Score (34 seasons between TV and radio).
Under the Cleveland Indians name, the team has been featured in several films, including:
Numerous Naps/Indians players have had statues made in their honor:
(*) – Inducted into the Baseball Hall of Fame as an Indian/Nap.
In July 2022, in honor of the 75th anniversary of Larry Doby becoming the AL's first black player, a mural honoring barrier-breaking players who played for the Indians/Guardians was added to the exterior of Progressive Field. The mural features Doby, Frank Robinson, and Satchel Paige.
A portion of Eagle Avenue near Progressive Field was renamed "Larry Doby Way" in 2012.
A number of parks and newly built and renovated youth baseball fields in Cleveland have been named after former and current Indians/Guardians players, including:
The Cleveland Guardians farm system consists of seven minor league affiliates.
(*) - There were no fans allowed in any MLB stadium in 2020 due to the COVID-19 pandemic.
(**) - At the beginning of the season, there was a limit of 30% capacity due to COVID-19 restrictions implemented by Ohio Governor Mike DeWine. On June 2, DeWine lifted the restrictions, and the team immediately allowed full capacity at Progressive Field.
{
"paragraph_id": 0,
"text": "The Cleveland Guardians are an American professional baseball team based in Cleveland. The Guardians compete in Major League Baseball (MLB) as a member club of the American League (AL) Central division. Since 1994, the team has played its home games at Progressive Field. Since their establishment as a Major League franchise in 1901, the team has won 11 Central Division titles, six American League pennants, and two World Series championships (in 1920 and 1948). The team's World Series championship drought since 1948 is the longest active among all 30 current Major League teams. The team's name references the Guardians of Traffic, eight monolithic 1932 Art Deco sculptures by Henry Hering on the city's Hope Memorial Bridge, which is adjacent to Progressive Field. The team's mascot is named \"Slider\". The team's spring training facility is at Goodyear Ballpark in Goodyear, Arizona.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The franchise originated in 1896 as the Columbus Buckeyes, a minor league team based in Columbus, Ohio, that played in the Western League. The team renamed to the Columbus Senators the following year and then relocated to Grand Rapids, Michigan in the middle of the 1899 season, becoming the Grand Rapids Furniture Makers for the remainder of the season. The team relocated to Cleveland in 1900 and was called the Cleveland Lake Shores. The Western League itself was renamed the American League prior to the 1900 season while continuing its minor league status. When the American League declared itself a major league in 1901, Cleveland was one of its eight charter franchises. Originally called the Cleveland Bluebirds or Blues, the team was also unofficially called the Cleveland Bronchos in 1902. Beginning in 1903, the team was named the Cleveland Napoleons or Naps, after team captain Nap Lajoie.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Following Lajoie's departure after the 1914 season, club owner Charles Somers requested that baseball writers choose a new name. They chose the name Cleveland Indians, declared to be a tribute of the nickname that fans gave to the Cleveland Spiders while Louis Sockalexis, a Native American, was playing for the team. That name stuck and remained in use for more than a century. Common nicknames for the Indians were \"the Tribe\" and \"the Wahoos\", the latter referencing their longtime logo, Chief Wahoo. After the Indians name came under criticism as part of the Native American mascot controversy, the team adopted the Guardians name following the 2021 season.",
"title": ""
},
{
"paragraph_id": 3,
"text": "From August 24 to September 14, 2017, the team won 22 consecutive games, the longest winning streak in American League history, and the second longest winning streak in Major League Baseball history.",
"title": ""
},
{
"paragraph_id": 4,
"text": "As of the end of the 2023 season, the franchise's overall record is 9,760–9,300 (.512).",
"title": ""
},
{
"paragraph_id": 5,
"text": "\"In 1857, baseball games were a daily spectacle in Cleveland's Public Squares. City authorities tried to find an ordinance forbidding it, to the joy of the crowd, they were unsuccessful. – Harold Seymour\"",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 6,
"text": "From 1865 to 1868 Forest Citys was an amateur ball club. During the 1869 season, Cleveland was among several cities that established professional baseball teams following the success of the 1869 Cincinnati Red Stockings, the first fully professional team. In the newspapers before and after 1870, the team was often called the Forest Citys, in the same generic way that the team from Chicago was sometimes called The Chicagos.",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 7,
"text": "In 1871 the Forest Citys joined the new National Association of Professional Base Ball Players (NA), the first professional league. Ultimately, two of the league's western clubs went out of business during the first season and the Chicago Fire left that city's White Stockings impoverished, unable to field a team again until 1874. Cleveland was thus the NA's westernmost outpost in 1872, the year the club folded. Cleveland played its full schedule to July 19 followed by two games versus Boston in mid-August and disbanded at the end of the season.",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 8,
"text": "In 1876, the National League (NL) supplanted the NA as the major professional league. Cleveland was not among its charter members, but by 1879 the league was looking for new entries and the city gained an NL team. The Cleveland Forest Citys were recreated, but rebranded in 1882 as the Cleveland Blues, because the National League required distinct colors for that season. The Blues had mediocre records for six seasons and were ruined by a trade war with the Union Association (UA) in 1884, when its three best players (Fred Dunlap, Jack Glasscock, and Jim McCormick) jumped to the UA after being offered higher salaries. The Cleveland Blues merged with the St. Louis Maroons UA team in 1885.",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 9,
"text": "Cleveland went without major league baseball for two seasons until gaining a team in the American Association (AA) in 1887. After the AA's Allegheny club jumped to the NL, Cleveland followed suit in 1889, as the AA began to crumble. The Cleveland ball club, named the Spiders (supposedly inspired by their \"skinny and spindly\" players) slowly became a power in the league. In 1891, the Spiders moved into League Park, which would serve as the home of Cleveland professional baseball for the next 55 years. Led by native Ohioan Cy Young, the Spiders became a contender in the mid-1890s, playing in the Temple Cup Series (that era's World Series) twice and winning it in 1895. The team began to fade after this success, and was dealt a severe blow under the ownership of the Robison brothers.",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 10,
"text": "Prior to the 1899 season, Frank Robison, the Spiders' owner, bought the St. Louis Browns, thus owning two clubs at the same time. The Browns were renamed the \"Perfectos\", and restocked with Cleveland talent. Just weeks before the season opener, most of the better Spiders were transferred to St. Louis, including three future Hall of Famers: Cy Young, Jesse Burkett and Bobby Wallace. The roster maneuvers failed to create a powerhouse Perfectos team, as St. Louis finished fifth in both 1899 and 1900. The Spiders were left with essentially a minor league lineup, and began to lose games at a record pace. Drawing almost no fans at home, they ended up playing most of their season on the road, and became known as \"The Wanderers\". The team ended the season in 12th place, 84 games out of first place, with an all-time worst record of 20–134 (.130 winning percentage). Following the 1899 season, the National League disbanded four teams, including the Spiders franchise. The disastrous 1899 season would actually be a step toward a new future for Cleveland fans the next year.",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 11,
"text": "The Cleveland Infants competed in the Players' League, which was well-attended in some cities, but club owners lacked the confidence to continue beyond the one season. The Cleveland Infants finished with 55 wins and 75 losses, playing their home games at Brotherhood Park.",
"title": "Early Cleveland baseball teams"
},
{
"paragraph_id": 12,
"text": "The Columbus Buckeyes were founded in Ohio in 1896 and were part of the Western League. In 1897 the team changed their name to the Columbus Senators. In the middle of the 1899 season, the Senators made a swap with the Grand Rapids Furniture Makers of the Interstate League; the Columbus Senators would become the Grand Rapids Furniture Makers and play in the Western League, and the Grand Rapids Furniture Makers would become the Columbus Senators and play in the Interstate League. Often confused with the Grand Rapids Rustlers (also known as Rippers), the Grand Rapids Furniture Makers finished the 1899 season in the Western League to become the Grand Rapids franchise to be relocated to Cleveland the following season. In 1900 the team moved to Cleveland and was named the Cleveland Lake Shores. Around the same time Ban Johnson changed the name of his minor league (Western League) to the American League. In 1900 the American League was still considered a minor league. In 1901 the team was called the Cleveland Bluebirds or Blues when the American League broke with the National Agreement and declared itself a competing Major League. The Cleveland franchise was among its eight charter members, and is one of four teams that remain in its original city, along with Boston, Chicago, and Detroit.",
"title": "Franchise history"
},
{
"paragraph_id": 13,
"text": "The new team was owned by coal magnate Charles Somers and tailor Jack Kilfoyl. Somers, a wealthy industrialist and also co-owner of the Boston Americans, lent money to other team owners, including Connie Mack's Philadelphia Athletics, to keep them and the new league afloat. Players did not think the name \"Bluebirds\" was suitable for a baseball team. Writers frequently shortened it to Cleveland Blues due to the players' all-blue uniforms, but the players did not like this unofficial name either. The players themselves tried to change the name to Cleveland Bronchos in 1902, but this name never caught on.",
"title": "Franchise history"
},
{
"paragraph_id": 14,
"text": "Cleveland suffered from financial problems in their first two seasons. This led Somers to seriously consider moving to either Pittsburgh or Cincinnati. Relief came in 1902 as a result of the conflict between the National and American Leagues. In 1901, Napoleon \"Nap\" Lajoie, the Philadelphia Phillies' star second baseman, jumped to the A's after his contract was capped at $2,400 per year—one of the highest-profile players to jump to the upstart AL. The Phillies subsequently filed an injunction to force Lajoie's return, which was granted by the Pennsylvania Supreme Court. The injunction appeared to doom any hopes of an early settlement between the warring leagues. However, a lawyer discovered that the injunction was only enforceable in the state of Pennsylvania. Mack, partly to thank Somers for his past financial support, agreed to trade Lajoie to the then-moribund Blues, who offered $25,000 salary over three years. Due to the injunction, however, Lajoie had to sit out any games played against the A's in Philadelphia. Lajoie arrived in Cleveland on June 4 and was an immediate hit, drawing 10,000 fans to League Park. Soon afterward, he was named team captain, and in 1903 the team was called the Cleveland Napoleons or Naps after a newspaper conducted a write-in contest.",
"title": "Franchise history"
},
{
"paragraph_id": 15,
"text": "Lajoie was named manager in 1905, and the team's fortunes improved somewhat. They finished half a game short of the pennant in 1908. However, the success did not last and Lajoie resigned during the 1909 season as manager but remained on as a player.",
"title": "Franchise history"
},
{
"paragraph_id": 16,
"text": "After that, the team began to unravel, leading Kilfoyl to sell his share of the team to Somers. Cy Young, who returned to Cleveland in 1909, was ineffective for most of his three remaining years and Addie Joss died from tubercular meningitis prior to the 1911 season.",
"title": "Franchise history"
},
{
"paragraph_id": 17,
"text": "Despite a strong lineup anchored by the potent Lajoie and Shoeless Joe Jackson, poor pitching kept the team below third place for most of the next decade. One reporter referred to the team as the Napkins, \"because they fold up so easily\". The team hit bottom in 1914 and 1915, finishing last place both years.",
"title": "Franchise history"
},
{
"paragraph_id": 18,
"text": "1915 brought significant changes to the team. Lajoie, nearly 40 years old, was no longer a top hitter in the league, batting only .258 in 1914. With Lajoie engaged in a feud with manager Joe Birmingham, the team sold Lajoie back to the A's.",
"title": "Franchise history"
},
{
"paragraph_id": 19,
"text": "With Lajoie gone, the club needed a new name. Somers asked the local baseball writers to come up with a new name, and based on their input, the team was renamed the Cleveland Indians. The name referred to the nickname \"Indians\" that was applied to the Cleveland Spiders baseball club during the time when Louis Sockalexis, a Native American, played in Cleveland (1897–1899).",
"title": "Franchise history"
},
{
"paragraph_id": 20,
"text": "At the same time, Somers' business ventures began to fail, leaving him deeply in debt. With the Indians playing poorly, attendance and revenue suffered. Somers decided to trade Jackson midway through the 1915 season for two players and $31,500, one of the largest sums paid for a player at the time.",
"title": "Franchise history"
},
{
"paragraph_id": 21,
"text": "By 1916, Somers was at the end of his tether, and sold the team to a syndicate headed by Chicago railroad contractor James C. \"Jack\" Dunn. Manager Lee Fohl, who had taken over in early 1915, acquired two minor league pitchers, Stan Coveleski and Jim Bagby and traded for center fielder Tris Speaker, who was engaged in a salary dispute with the Red Sox. All three would ultimately become key players in bringing a championship to Cleveland.",
"title": "Franchise history"
},
{
"paragraph_id": 22,
"text": "Speaker took over the reins as player-manager in 1919, and led the team to a championship in 1920. On August 16, 1920, the Indians were playing the Yankees at the Polo Grounds in New York. Shortstop Ray Chapman, who often crowded the plate, was batting against Carl Mays, who had an unusual underhand delivery. It was also late in the afternoon and the infield was completely shaded with the center field area (the batters' background) bathed in sunlight. As well, at the time, \"part of every pitcher's job was to dirty up a new ball the moment it was thrown onto the field. By turns, they smeared it with dirt, licorice, tobacco juice; it was deliberately scuffed, sandpapered, scarred, cut, even spiked. The result was a misshapen, earth-colored ball that traveled through the air erratically, tended to soften in the later innings, and as it came over the plate, was very hard to see.\"",
"title": "Franchise history"
},
{
"paragraph_id": 23,
"text": "In any case, Chapman did not move reflexively when Mays' pitch came his way. The pitch hit Chapman in the head, fracturing his skull. Chapman died the next day, becoming the only player to sustain a fatal injury from a pitched ball. The Indians, who at the time were locked in a tight three-way pennant race with the Yankees and White Sox, were not slowed down by the death of their teammate. Rookie Joe Sewell hit .329 after replacing Chapman in the lineup.",
"title": "Franchise history"
},
{
"paragraph_id": 24,
"text": "In September 1920, the Black Sox Scandal came to a boil. With just a few games left in the season, and Cleveland and Chicago neck-and-neck for first place at 94–54 and 95–56 respectively, the Chicago owner suspended eight players. The White Sox lost two of three in their final series, while Cleveland won four and lost two in their final two series. Cleveland finished two games ahead of Chicago and three games ahead of the Yankees to win its first pennant, led by Speaker's .388 hitting, Jim Bagby's 30 victories and solid performances from Steve O'Neill and Stan Coveleski. Cleveland went on to defeat the Brooklyn Robins 5–2 in the World Series for their first title, winning four games in a row after the Robins took a 2–1 Series lead. The Series included three memorable \"firsts\", all of them in Game 5 at Cleveland, and all by the home team. In the first inning, right fielder Elmer Smith hit the first Series grand slam. In the fourth inning, Jim Bagby hit the first Series home run by a pitcher. In the top of the fifth inning, second baseman Bill Wambsganss executed the first (and only, so far) unassisted triple play in World Series history, in fact, the only Series triple play of any kind.",
"title": "Franchise history"
},
{
"paragraph_id": 25,
"text": "The team would not reach the heights of 1920 again for 28 years. Speaker and Coveleski were aging and the Yankees were rising with a new weapon: Babe Ruth and the home run. They managed two second-place finishes but spent much of the decade in last place. In 1927 Dunn's widow, Mrs. George Pross (Dunn had died in 1922), sold the team to a syndicate headed by Alva Bradley.",
"title": "Franchise history"
},
{
"paragraph_id": 26,
"text": "The Indians were a middling team by the 1930s, finishing third or fourth most years. 1936 brought Cleveland a new superstar in 17-year-old pitcher Bob Feller, who came from Iowa with a dominating fastball. That season, Feller set a record with 17 strikeouts in a single game and went on to lead the league in strikeouts from 1938 to 1941.",
"title": "Franchise history"
},
{
"paragraph_id": 27,
"text": "On August 20, 1938, Indians catchers Hank Helf and Frank Pytlak set the \"all-time altitude mark\" by catching baseballs dropped from the 708-foot (216 m) Terminal Tower.",
"title": "Franchise history"
},
{
"paragraph_id": 28,
"text": "By 1940, Feller, along with Ken Keltner, Mel Harder and Lou Boudreau, led the Indians to within one game of the pennant. However, the team was wracked with dissension, with some players (including Feller and Mel Harder) going so far as to request that Bradley fire manager Ossie Vitt. Reporters lampooned them as the Cleveland Crybabies. Feller, who had pitched a no-hitter to open the season and won 27 games, lost the final game of the season to unknown pitcher Floyd Giebell of the Detroit Tigers. The Tigers won the pennant and Giebell never won another major league game.",
"title": "Franchise history"
},
{
"paragraph_id": 29,
"text": "Cleveland entered 1941 with a young team and a new manager; Roger Peckinpaugh had replaced the despised Vitt; but the team regressed, finishing in fourth. Cleveland would soon be depleted of two stars. Hal Trosky retired in 1941 due to migraine headaches and Bob Feller enlisted in the Navy two days after the Attack on Pearl Harbor. Starting third baseman Ken Keltner and outfielder Ray Mack were both drafted in 1945 taking two more starters out of the lineup.",
"title": "Franchise history"
},
{
"paragraph_id": 30,
"text": "In 1946, Bill Veeck formed an investment group that purchased the Cleveland Indians from Bradley's group for a reported $1.6 million. Among the investors was Bob Hope, who had grown up in Cleveland, and former Tigers slugger, Hank Greenberg. A former owner of a minor league franchise in Milwaukee, Veeck brought to Cleveland a gift for promotion. At one point, Veeck hired rubber-faced Max Patkin, the \"Clown Prince of Baseball\" as a coach. Patkin's appearance in the coaching box was the sort of promotional stunt that delighted fans but infuriated the American League front office.",
"title": "Franchise history"
},
{
"paragraph_id": 31,
"text": "Recognizing that he had acquired a solid team, Veeck soon abandoned the aging, small and lightless League Park to take up full-time residence in massive Cleveland Municipal Stadium. The Indians had briefly moved from League Park to Municipal Stadium in mid-1932, but moved back to League Park due to complaints about the cavernous environment. From 1937 onward, however, the Indians began playing an increasing number of games at Municipal, until by 1940 they played most of their home slate there. League Park was mostly demolished in 1951, but has since been rebuilt as a recreational park.",
"title": "Franchise history"
},
{
"paragraph_id": 32,
"text": "Making the most of the cavernous stadium, Veeck had a portable center field fence installed, which he could move in or out depending on how the distance favored the Indians against their opponents in a given series. The fence moved as much as 15 feet (5 m) between series opponents. Following the 1947 season, the American League countered with a rule change that fixed the distance of an outfield wall for the duration of a season. The massive stadium did, however, permit the Indians to set the then-record for the largest crowd to see a Major League baseball game. On October 10, 1948, Game 5 of the World Series against the Boston Braves drew over 84,000. The record stood until the Los Angeles Dodgers drew a crowd in excess of 92,500 to watch Game 5 of the 1959 World Series at the Los Angeles Memorial Coliseum against the Chicago White Sox.",
"title": "Franchise history"
},
{
"paragraph_id": 33,
"text": "Under Veeck's leadership, one of Cleveland's most significant achievements was breaking the color barrier in the American League by signing Larry Doby, formerly a player for the Negro league's Newark Eagles in 1947, 11 weeks after Jackie Robinson signed with the Dodgers. Similar to Robinson, Doby battled racism on and off the field but posted a .301 batting average in 1948, his first full season. A power-hitting center fielder, Doby led the American League twice in homers.",
"title": "Franchise history"
},
{
"paragraph_id": 34,
"text": "In 1948, needing pitching for the stretch run of the pennant race, Veeck turned to the Negro leagues again and signed pitching great Satchel Paige amid much controversy. Barred from Major League Baseball during his prime, Veeck's signing of the aging star in 1948 was viewed by many as another publicity stunt. At an official age of 42, Paige became the oldest rookie in Major League baseball history, and the first black pitcher. Paige ended the year with a 6–1 record with a 2.48 ERA, 45 strikeouts and two shutouts.",
"title": "Franchise history"
},
{
"paragraph_id": 35,
"text": "In 1948, veterans Boudreau, Keltner, and Joe Gordon had career offensive seasons, while newcomers Doby and Gene Bearden also had standout seasons. The team went down to the wire with the Boston Red Sox, winning a one-game playoff, the first in American League history, to go to the World Series. In the series, the Indians defeated the Boston Braves four games to two for their first championship in 28 years. Boudreau won the American League MVP Award.",
"title": "Franchise history"
},
{
"paragraph_id": 36,
"text": "The Indians appeared in a film the following year titled The Kid From Cleveland, in which Veeck had an interest. The film portrayed the team helping out a \"troubled teenaged fan\" and featured many members of the Indians organization. However, filming during the season cost the players valuable rest days leading to fatigue towards the end of the season. That season, Cleveland again contended before falling to third place. On September 23, 1949, Bill Veeck and the Indians buried their 1948 pennant in center field the day after they were mathematically eliminated from the pennant race.",
"title": "Franchise history"
},
{
"paragraph_id": 37,
"text": "Later in 1949, Veeck's first wife (who had a half-stake in Veeck's share of the team) divorced him. With most of his money tied up in the Indians, Veeck was forced to sell the team to a syndicate headed by insurance magnate Ellis Ryan.",
"title": "Franchise history"
},
{
"paragraph_id": 38,
"text": "In 1953, Al Rosen was an All Star for the second year in a row, was named The Sporting News Major League Player of the Year, and won the American League Most Valuable Player Award in a unanimous vote playing for the Indians after leading the AL in runs, home runs, RBIs (for the second year in a row), and slugging percentage, and coming in second by one point in batting average. Ryan was forced out in 1953 in favor of Myron Wilson, who in turn gave way to William Daley in 1956. Despite this turnover in the ownership, a powerhouse team composed of Feller, Doby, Minnie Miñoso, Luke Easter, Bobby Ávila, Al Rosen, Early Wynn, Bob Lemon, and Mike Garcia continued to contend through the early 1950s. However, Cleveland only won a single pennant in the decade, in 1954, finishing second to the New York Yankees five times.",
"title": "Franchise history"
},
{
"paragraph_id": 39,
"text": "The winningest season in franchise history came in 1954, when the Indians finished the season with a record of 111–43 (.721). That mark set an American League record for wins that stood for 44 years until the Yankees won 114 games in 1998 (a 162-game regular season). The Indians' 1954 winning percentage of .721 is still an American League record. The Indians returned to the World Series to face the New York Giants. The team could not bring home the title, however, ultimately being upset by the Giants in a sweep. The series was notable for Willie Mays' over-the-shoulder catch off the bat of Vic Wertz in Game 1. Cleveland remained a talented team throughout the remainder of the decade, finishing in second place in 1959, George Strickland's last full year in the majors.",
"title": "Franchise history"
},
{
"paragraph_id": 40,
"text": "From 1960 to 1993, the Indians managed one third-place finish (in 1968) and six fourth-place finishes (in 1960, 1974, 1975, 1976, 1990, and 1992) but spent the rest of the time at or near the bottom of the standings, including four seasons with over 100 losses (1971, 1985, 1987, 1991).",
"title": "Franchise history"
},
{
"paragraph_id": 41,
"text": "The Indians hired general manager Frank Lane, known as \"Trader\" Lane, away from the St. Louis Cardinals in 1957. Lane over the years had gained a reputation as a GM who loved to make deals. With the White Sox, Lane had made over 100 trades involving over 400 players in seven years. In a short stint in St. Louis, he traded away Red Schoendienst and Harvey Haddix. Lane summed up his philosophy when he said that the only deals he regretted were the ones that he did not make.",
"title": "Franchise history"
},
{
"paragraph_id": 42,
"text": "One of Lane's early trades in Cleveland was to send Roger Maris to the Kansas City Athletics in the middle of 1958. Indians executive Hank Greenberg was not happy about the trade and neither was Maris, who said that he could not stand Lane. After Maris broke Babe Ruth's home run record, Lane defended himself by saying he still would have done the deal because Maris was unknown and he received good ballplayers in exchange.",
"title": "Franchise history"
},
{
"paragraph_id": 43,
"text": "After the Maris trade, Lane acquired 25-year-old Norm Cash from the White Sox for Minnie Miñoso and then traded him to Detroit before he ever played a game for the Indians; Cash went on to hit over 350 home runs for the Tigers. The Indians received Steve Demeter in the deal, who had only five at-bats for Cleveland.",
"title": "Franchise history"
},
{
"paragraph_id": 44,
"text": "In 1960, Lane made the trade that would define his tenure in Cleveland when he dealt slugging right fielder and fan favorite Rocky Colavito to the Detroit Tigers for Harvey Kuenn just before Opening Day in 1960.",
"title": "Franchise history"
},
{
"paragraph_id": 45,
"text": "It was a blockbuster trade that swapped the 1959 AL home run co-champion (Colavito) for the AL batting champion (Kuenn). After the trade, however, Colavito hit over 30 home runs four times and made three All-Star teams for Detroit and Kansas City before returning to Cleveland in 1965. Kuenn, on the other hand, played only one season for the Indians before departing for San Francisco in a trade for an aging Johnny Antonelli and Willie Kirkland. Akron Beacon Journal columnist Terry Pluto documented the decades of woe that followed the trade in his book The Curse of Rocky Colavito. Despite being attached to the curse, Colavito said that he never placed a curse on the Indians but that the trade was prompted by a salary dispute with Lane.",
"title": "Franchise history"
},
{
"paragraph_id": 46,
"text": "Lane also engineered a unique trade of managers in mid-season 1960, sending Joe Gordon to the Tigers in exchange for Jimmy Dykes. Lane left the team in 1961, but ill-advised trades continued. In 1965, the Indians traded pitcher Tommy John, who would go on to win 288 games in his career, and 1966 Rookie of the Year Tommy Agee to the White Sox to get Colavito back.",
"title": "Franchise history"
},
{
"paragraph_id": 47,
"text": "However, Indians' pitchers set numerous strikeout records. They led the league in K's every year from 1963 to 1968, and narrowly missed in 1969. The 1964 staff was the first to amass 1,100 strikeouts, and in 1968, they were the first to collect more strikeouts than hits allowed.",
"title": "Franchise history"
},
{
"paragraph_id": 48,
"text": "The 1970s were not much better, with the Indians trading away several future stars, including Graig Nettles, Dennis Eckersley, Buddy Bell and 1971 Rookie of the Year Chris Chambliss, for a number of players who made no impact.",
"title": "Franchise history"
},
{
"paragraph_id": 49,
"text": "Constant ownership changes did not help the Indians. In 1963, Daley's syndicate sold the team to a group headed by general manager Gabe Paul. Three years later, Paul sold the Indians to Vernon Stouffer, of the Stouffer's frozen-food empire. Prior to Stouffer's purchase, the team was rumored to be relocated due to poor attendance. Despite the potential for a financially strong owner, Stouffer had some non-baseball related financial setbacks and, consequently, the team was cash-poor. In order to solve some financial problems, Stouffer had made an agreement to play a minimum of 30 home games in New Orleans with a view to a possible move there. After rejecting an offer from George Steinbrenner and former Indian Al Rosen, Stouffer sold the team in 1972 to a group led by Cleveland Cavaliers and Cleveland Barons owner Nick Mileti. Steinbrenner went on to buy the New York Yankees in 1973.",
"title": "Franchise history"
},
{
"paragraph_id": 50,
"text": "Only five years later, Mileti's group sold the team for $11 million to a syndicate headed by trucking magnate Steve O'Neill and including former general manager and owner Gabe Paul. O'Neill's death in 1983 led to the team going on the market once more. O'Neill's nephew Patrick O'Neill did not find a buyer until real estate magnates Richard and David Jacobs purchased the team in 1986.",
"title": "Franchise history"
},
{
"paragraph_id": 51,
"text": "The team was unable to move out of last place, with losing seasons between 1969 and 1975. One highlight was the acquisition of Gaylord Perry in 1972. The Indians traded fireballer \"Sudden Sam\" McDowell for Perry, who became the first Indian pitcher to win the Cy Young Award. In 1975, Cleveland broke another color barrier with the hiring of Frank Robinson as Major League Baseball's first African American manager. Robinson served as player-manager and provided a franchise highlight when he hit a pinch-hit home run on Opening Day. But the high-profile signing of Wayne Garland, a 20-game winner in Baltimore, proved to be a disaster after Garland suffered from shoulder problems and went 28–48 over five years. The team failed to improve with Robinson as manager and he was fired in 1977. In 1977, pitcher Dennis Eckersley threw a no-hitter against the California Angels. The next season, he was traded to the Boston Red Sox where he won 20 games in 1978 and another 17 in 1979.",
"title": "Franchise history"
},
{
"paragraph_id": 52,
"text": "The 1970s also featured the infamous Ten Cent Beer Night at Cleveland Municipal Stadium. The ill-conceived promotion at a 1974 game against the Texas Rangers ended in a riot by fans and a forfeit by the Indians.",
"title": "Franchise history"
},
{
"paragraph_id": 53,
"text": "There were more bright spots in the 1980s. In May 1981, Len Barker threw a perfect game against the Toronto Blue Jays, joining Addie Joss as the only other Indian pitcher to do so. \"Super Joe\" Charboneau won the American League Rookie of the Year award. Unfortunately, Charboneau was out of baseball by 1983 after falling victim to back injuries and Barker, who was also hampered by injuries, never became a consistently dominant starting pitcher.",
"title": "Franchise history"
},
{
"paragraph_id": 54,
"text": "Eventually, the Indians traded Barker to the Atlanta Braves for Brett Butler and Brook Jacoby, who became mainstays of the team for the remainder of the decade. Butler and Jacoby were joined by Joe Carter, Mel Hall, Julio Franco and Cory Snyder, bringing new hope to fans in the late 1980s.",
"title": "Franchise history"
},
{
"paragraph_id": 55,
"text": "Cleveland's struggles over the 30-year span were highlighted in the 1989 film Major League, which comically depicted a hapless Cleveland ball club going from worst to first by the end of the film.",
"title": "Franchise history"
},
{
"paragraph_id": 56,
"text": "Throughout the 1980s, the Indians' owners had pushed for a new stadium. Cleveland Stadium had been a symbol of the Indians' glory years in the 1940s and 1950s. However, during the lean years even crowds of 40,000 were swallowed up by the cavernous environment. The old stadium was not aging gracefully; chunks of concrete were falling off in sections and the old wooden pilings were petrifying. In 1984, a proposal for a $150 million domed stadium was defeated in a referendum 2–1.",
"title": "Franchise history"
},
{
"paragraph_id": 57,
"text": "Finally, in May 1990, Cuyahoga County voters passed an excise tax on sales of alcohol and cigarettes in the county. The tax proceeds were to be used for financing the construction of the Gateway Sports and Entertainment Complex, which would include Jacobs Field for the Indians and Gund Arena for the Cleveland Cavaliers basketball team.",
"title": "Franchise history"
},
{
"paragraph_id": 58,
"text": "The team's fortunes started to turn in 1989, ironically with a very unpopular trade. The team sent power-hitting outfielder Joe Carter to the San Diego Padres for two unproven players, Sandy Alomar Jr. and Carlos Baerga. Alomar made an immediate impact, not only being elected to the All-Star team but also winning Cleveland's fourth Rookie of the Year award and a Gold Glove. Baerga became a three-time All-Star with consistent offensive production.",
"title": "Franchise history"
},
{
"paragraph_id": 59,
"text": "Indians general manager John Hart made a number of moves that finally brought success to the team. In 1991, he hired former Indian Mike Hargrove to manage and traded catcher Eddie Taubensee to the Houston Astros who, with a surplus of outfielders, were willing to part with Kenny Lofton. Lofton finished second in AL Rookie of the Year balloting with a .285 average and 66 stolen bases.",
"title": "Franchise history"
},
{
"paragraph_id": 60,
"text": "The Indians were named \"Organization of the Year\" by Baseball America in 1992, in response to the appearance of offensive bright spots and an improving farm system.",
"title": "Franchise history"
},
{
"paragraph_id": 61,
"text": "The team suffered a tragedy during spring training of 1993, when a boat carrying pitchers Steve Olin, Tim Crews, and Bob Ojeda crashed into a pier. Olin and Crews were killed, and Ojeda was seriously injured. (Ojeda missed most of the season, and retired the following year).",
"title": "Franchise history"
},
{
"paragraph_id": 62,
"text": "By the end of the 1993 season, the team was in transition, leaving Cleveland Stadium and fielding a talented nucleus of young players. Many of those players came from the Indians' new AAA farm team, the Charlotte Knights, who won the International League title that year.",
"title": "Franchise history"
},
{
"paragraph_id": 63,
"text": "Indians General Manager John Hart and team owner Richard Jacobs managed to turn the team's fortunes around. The Indians opened Jacobs Field in 1994 with the aim of improving on the prior season's sixth-place finish. The Indians were only one game behind the division-leading Chicago White Sox on August 12 when a players strike wiped out the rest of the season.",
"title": "Franchise history"
},
{
"paragraph_id": 64,
"text": "Having contended for the division in the aborted 1994 season, Cleveland sprinted to a 100–44 record (the season was shortened by 18 games due to player/owner negotiations) in 1995, winning its first-ever divisional title. Veterans Dennis Martínez, Orel Hershiser and Eddie Murray combined with a young core of players including Omar Vizquel, Albert Belle, Jim Thome, Manny Ramírez, Kenny Lofton and Charles Nagy to lead the league in team batting average as well as team ERA.",
"title": "Franchise history"
},
{
"paragraph_id": 65,
"text": "After defeating the Boston Red Sox in the Division Series and the Seattle Mariners in the ALCS, Cleveland clinched the American League pennant and a World Series berth, for the first time since 1954. The World Series ended in disappointment, however: the Indians fell in six games to the Atlanta Braves.",
"title": "Franchise history"
},
{
"paragraph_id": 66,
"text": "Tickets for every Indians home game sold out several months before opening day in 1996. The Indians repeated as AL Central champions but lost to the wild card Baltimore Orioles in the Division Series.",
"title": "Franchise history"
},
{
"paragraph_id": 67,
"text": "In 1997, Cleveland started slow but finished with an 86–75 record. Taking their third consecutive AL Central title, the Indians defeated the New York Yankees in the Division Series, 3–2. After defeating the Baltimore Orioles in the ALCS, Cleveland went on to face the Florida Marlins in the World Series that featured the coldest game in World Series history. With the series tied after Game 6, the Indians went into the ninth inning of Game Seven with a 2–1 lead, but closer José Mesa allowed the Marlins to tie the game. In the eleventh inning, Édgar Rentería drove in the winning run giving the Marlins their first championship. Cleveland became the first team to lose the World Series after carrying the lead into the ninth inning of the seventh game.",
"title": "Franchise history"
},
{
"paragraph_id": 68,
"text": "In 1998, the Indians made the postseason for the fourth straight year. After defeating the wild-card Boston Red Sox 3–1 in the Division Series, Cleveland lost the 1998 ALCS in six games to the New York Yankees, who had come into the postseason with a then-AL record 114 wins in the regular season.",
"title": "Franchise history"
},
{
"paragraph_id": 69,
"text": "For the 1999 season, Cleveland added relief pitcher Ricardo Rincón and second baseman Roberto Alomar, brother of catcher Sandy Alomar Jr., and won the Central Division title for the fifth consecutive year. The team scored 1,009 runs, becoming the first (and to date only) team since the 1950 Boston Red Sox to score more than 1,000 runs in a season. This time, Cleveland did not make it past the first round, losing the Division Series to the Red Sox, despite taking a 2–0 lead in the series. In game three, Indians starter Dave Burba went down with an injury in the 4th inning. Four pitchers, including presumed game four starter Jaret Wright, surrendered nine runs in relief. Without a long reliever or emergency starter on the playoff roster, Hargrove started both Bartolo Colón and Charles Nagy in games four and five on only three days rest. The Indians lost game four 23–7 and game five 12–8. Four days later, Hargrove was dismissed as manager.",
"title": "Franchise history"
},
{
"paragraph_id": 70,
"text": "In 2000, the Indians had a 44–42 start, but caught fire after the All Star break and went 46–30 the rest of the way to finish 90–72. The team had one of the league's best offenses that year and a defense that yielded three gold gloves. However, they ended up five games behind the Chicago White Sox in the Central division and missed the wild card by one game to the Seattle Mariners. Mid-season trades brought Bob Wickman and Jake Westbrook to Cleveland. After the season, free-agent outfielder Manny Ramírez departed for the Boston Red Sox.",
"title": "Franchise history"
},
{
"paragraph_id": 71,
"text": "In 2000, Larry Dolan bought the Indians for $320 million from Richard Jacobs, who, along with his late brother David, had paid $45 million for the club in 1986. The sale set a record at the time for the sale of a baseball franchise.",
"title": "Franchise history"
},
{
"paragraph_id": 72,
"text": "2001 saw a return to the postseason. After the departures of Ramírez and Sandy Alomar Jr., the Indians signed Ellis Burks and former MVP Juan González, who helped the team win the Central division with a 91–71 record. One of the highlights came on August 5, when the Indians completed the biggest comeback in MLB History. Cleveland rallied to close a 14–2 deficit in the seventh inning to defeat the Seattle Mariners 15–14 in 11 innings. The Mariners, who won an MLB record-tying 116 games that season, had a strong bullpen, and Indians manager Charlie Manuel had already pulled many of his starters with the game seemingly out of reach.",
"title": "Franchise history"
},
{
"paragraph_id": 73,
"text": "Seattle and Cleveland met in the first round of the postseason; however, the Mariners won the series 3–2. In the 2001–02 offseason, GM John Hart resigned and his assistant, Mark Shapiro, took the reins.",
"title": "Franchise history"
},
{
"paragraph_id": 74,
"text": "Shapiro moved to rebuild by dealing aging veterans for younger talent. He traded Roberto Alomar to the New York Mets for a package that included outfielder Matt Lawton and prospects Alex Escobar and Billy Traber. When the team fell out of contention in mid-2002, Shapiro fired manager Charlie Manuel and traded pitching ace Bartolo Colón for prospects Brandon Phillips, Cliff Lee, and Grady Sizemore; acquired Travis Hafner from the Rangers for Ryan Drese and Einar Díaz; and picked up Coco Crisp from the St. Louis Cardinals for aging starter Chuck Finley. Jim Thome left after the season, going to the Phillies for a larger contract.",
"title": "Franchise history"
},
{
"paragraph_id": 75,
"text": "Young Indians teams finished far out of contention in 2002 and 2003 under new manager Eric Wedge. They posted strong offensive numbers in 2004, but continued to struggle with a bullpen that blew more than 20 saves. A highlight of the season was a 22–0 victory over the New York Yankees on August 31, one of the worst defeats suffered by the Yankees in team history.",
"title": "Franchise history"
},
{
"paragraph_id": 76,
"text": "In early 2005, the offense got off to a poor start. After a brief July slump, the Indians caught fire in August, and cut a 15.5 game deficit in the Central Division down to 1.5 games. However, the season came to an end as the Indians went on to lose six of their last seven games, five of them by one run, missing the playoffs by only two games. Shapiro was named Executive of the Year in 2005. The next season, the club made several roster changes, while retaining its nucleus of young players. The off-season was highlighted by the acquisition of top prospect Andy Marte from the Boston Red Sox. The Indians had a solid offensive season, led by career years from Travis Hafner and Grady Sizemore. Hafner, despite missing the last month of the season, tied the single season grand slam record of six, which was set in 1987 by Don Mattingly. Despite the solid offensive performance, the bullpen struggled with 23 blown saves (a Major League worst), and the Indians finished a disappointing fourth.",
"title": "Franchise history"
},
{
"paragraph_id": 77,
"text": "In 2007, Shapiro signed veteran help for the bullpen and outfield in the offseason. Veterans Aaron Fultz and Joe Borowski joined Rafael Betancourt in the Indians bullpen. The Indians improved significantly over the prior year and went into the All-Star break in second place. The team brought back Kenny Lofton for his third stint with the team in late July. The Indians finished with a 96–66 record tied with the Red Sox for best in baseball, their seventh Central Division title in 13 years and their first postseason trip since 2001.",
"title": "Franchise history"
},
{
"paragraph_id": 78,
"text": "The Indians began their playoff run by defeating the Yankees in the ALDS three games to one. This series will be most remembered for the swarm of bugs that overtook the field in the later innings of Game Two. They also jumped out to a three-games-to-one lead over the Red Sox in the ALCS. The season ended in disappointment when Boston swept the final three games to advance to the 2007 World Series.",
"title": "Franchise history"
},
{
"paragraph_id": 79,
"text": "Despite the loss, Cleveland players took home a number of awards. Grady Sizemore, who had a .995 fielding percentage and only two errors in 405 chances, won the Gold Glove award, Cleveland's first since 2001. Indians Pitcher CC Sabathia won the second Cy Young Award in team history with a 19–7 record, a 3.21 ERA and an MLB-leading 241 innings pitched. Eric Wedge was awarded the first Manager of the Year Award in team history. Shapiro was named to his second Executive of the Year in 2007.",
"title": "Franchise history"
},
{
"paragraph_id": 80,
"text": "The Indians struggled during the 2008 season. Injuries to sluggers Travis Hafner and Victor Martinez, as well as starting pitchers Jake Westbrook and Fausto Carmona led to a poor start. The Indians, falling to last place for a short time in June and July, traded CC Sabathia to the Milwaukee Brewers for prospects Matt LaPorta, Rob Bryson, and Michael Brantley. and traded starting third baseman Casey Blake for catching prospect Carlos Santana. Pitcher Cliff Lee went 22–3 with an ERA of 2.54 and earned the AL Cy Young Award. Grady Sizemore had a career year, winning a Gold Glove Award and a Silver Slugger Award, and the Indians finished with a record of 81–81.",
"title": "Franchise history"
},
{
"paragraph_id": 81,
"text": "Prospects for the 2009 season dimmed early when the Indians ended May with a record of 22–30. Shapiro made multiple trades: Cliff Lee and Ben Francisco to the Philadelphia Phillies for prospects Jason Knapp, Carlos Carrasco, Jason Donald and Lou Marson; Victor Martinez to the Boston Red Sox for prospects Bryan Price, Nick Hagadone and Justin Masterson; Ryan Garko to the Texas Rangers for Scott Barnes; and Kelly Shoppach to the Tampa Bay Rays for Mitch Talbot. The Indians finished the season tied for fourth in their division, with a record of 65–97. The team announced on September 30, 2009, that Eric Wedge and all of the team's coaching staff were released at the end of the 2009 season. Manny Acta was hired as the team's 40th manager on October 25, 2009.",
"title": "Franchise history"
},
{
"paragraph_id": 82,
"text": "On February 18, 2010, it was announced that Shapiro (following the end of the 2010 season) would be promoted to team President, with current President Paul Dolan becoming the new Chairman/CEO, and longtime Shapiro assistant Chris Antonetti filling the GM role.",
"title": "Franchise history"
},
{
"paragraph_id": 83,
"text": "On January 18, 2011, longtime popular former first baseman and manager Mike Hargrove was brought in as a special adviser. The Indians started the 2011 season strong – going 30–15 in their first 45 games and seven games ahead of the Detroit Tigers for first place. Injuries led to a slump where the Indians fell out of first place. Many minor leaguers such as Jason Kipnis and Lonnie Chisenhall got opportunities to fill in for the injuries. The biggest news of the season came on July 30 when the Indians traded four prospects for Colorado Rockies star pitcher, Ubaldo Jiménez. The Indians sent their top two pitchers in the minors, Alex White and Drew Pomeranz along with Joe Gardner and Matt McBride. On August 25, the Indians signed the team leader in home runs, Jim Thome off of waivers. He made his first appearance in an Indians uniform since he left Cleveland after the 2002 season. To honor Thome, the Indians placed him at his original position, third base, for one pitch against the Minnesota Twins on September 25. It was his first appearance at third base since 1996, and his last for Cleveland. The Indians finished the season in 2nd place, 15 games behind the division champion Tigers.",
"title": "Franchise history"
},
{
"paragraph_id": 84,
"text": "The Indians broke Progressive Field's Opening Day attendance record with 43,190 against the Toronto Blue Jays on April 5, 2012. The game went 16 innings, setting the MLB Opening Day record, and lasted 5 hours and 14 minutes.",
"title": "Franchise history"
},
{
"paragraph_id": 85,
"text": "On September 27, 2012, with six games left in the Indians' 2012 season, Manny Acta was fired; Sandy Alomar Jr. was named interim manager for the remainder of the season. On October 6, the Indians announced that Terry Francona, who managed the Boston Red Sox to five playoff appearances and two World Series between 2004 and 2011, would take over as manager for 2013.",
"title": "Franchise history"
},
{
"paragraph_id": 86,
"text": "The Indians entered the 2013 season following an active offseason of dramatic roster turnover. Key acquisitions included free agent 1B/OF Nick Swisher and CF Michael Bourn. The team added prized right-handed pitching prospect Trevor Bauer, OF Drew Stubbs, and relief pitchers Bryan Shaw and Matt Albers in a three-way trade with the Arizona Diamondbacks and Cincinnati Reds that sent RF Shin-Soo Choo to the Reds, and Tony Sipp to the Arizona Diamondbacks Other notable additions included utility man Mike Avilés, catcher Yan Gomes, designated hitter Jason Giambi, and starting pitcher Scott Kazmir. The 2013 Indians increased their win total by 24 over 2012 (from 68 to 92), finishing in second place, one game behind Detroit in the Central division, but securing the number one seed in the American League Wild Card Standings. In their first postseason appearance since 2007, Cleveland lost the 2013 American League Wild Card Game 4–0 at home to Tampa Bay. Francona was recognized for the turnaround with the 2013 American League Manager of the Year Award.",
"title": "Franchise history"
},
{
"paragraph_id": 87,
"text": "With an 85–77 record, the 2014 Indians had consecutive winning seasons for the first time since 1999–2001, but they were eliminated from playoff contention during the last week of the season and finished third in the AL Central.",
"title": "Franchise history"
},
{
"paragraph_id": 88,
"text": "In 2015, after struggling through the first half of the season, the Indians finished 81–80 for their third consecutive winning season, which the team had not done since 1999–2001. For the second straight year, the Tribe finished third in the Central and was eliminated from the Wild Card race during the last week of the season. Following the departure of longtime team executive Mark Shapiro on October 6, the Indians promoted GM Chris Antonetti to President of Baseball Operations, assistant general manager Mike Chernoff to GM, and named Derek Falvey as assistant GM. Falvey was later hired by the Minnesota Twins in 2016, becoming their President of Baseball Operations.",
"title": "Franchise history"
},
{
"paragraph_id": 89,
"text": "The Indians set what was then a franchise record for longest winning streak when they won their 14th consecutive game, a 2–1 win over the Toronto Blue Jays in 19 innings on July 1, 2016, at Rogers Centre. The team clinched the Central Division pennant on September 26, their eighth division title overall and first since 2007, as well as returning to the playoffs for the first time since 2013. They finished the regular season at 94–67, marking their fourth straight winning season, a feat not accomplished since the 1990s and early 2000s.",
"title": "Franchise history"
},
{
"paragraph_id": 90,
"text": "The Indians began the 2016 postseason by sweeping the Boston Red Sox in the best-of-five American League Division Series, then defeated the Blue Jays in five games in the 2016 American League Championship Series to claim their sixth American League pennant and advance to the World Series against the Chicago Cubs. It marked the first appearance for the Indians in the World Series since 1997 and first for the Cubs since 1945. The Indians took a 3–1 series lead following a victory in Game 4 at Wrigley Field, but the Cubs rallied to take the final three games and won the series 4 games to 3. The Indians' 2016 success led to Francona winning his second AL Manager of the Year Award with the club.",
"title": "Franchise history"
},
{
"paragraph_id": 91,
"text": "From August 24 through September 15 during the 2017 season, the Indians set a new American League record by winning 22 games in a row. On September 28, the Indians won their 100th game of the season, marking only the third time in history the team has reached that milestone. They finished the regular season with 102 wins, second-most in team history (behind 1954's 111 win team). The Indians earned the AL Central title for the second consecutive year, along with home-field advantage throughout the American League playoffs, but they lost the 2017 ALDS to the Yankees 3–2 after being up 2–0.",
"title": "Franchise history"
},
{
"paragraph_id": 92,
"text": "In 2018, the Indians won their third consecutive AL Central crown with a 91–71 record, but were swept in the 2018 American League Division Series by the Houston Astros, who outscored Cleveland 21–6. In 2019, despite a two-game improvement, the Indians missed the playoffs as they trailed three games behind the Tampa Bay Rays for the second AL Wild Card berth. During the 2020 season (shortened to 60 games because of the COVID-19 pandemic), the Indians were 35–25, finishing second behind the Minnesota Twins in the AL Central, but qualified for the expanded playoffs. In the best-of-three AL Wild Card Series, the Indians were swept by the New York Yankees, ending their season.",
"title": "Franchise history"
},
{
"paragraph_id": 93,
"text": "On December 18, 2020, the team announced that the Indians name and logo would be dropped after the 2021 season, later revealing the replacement to be the Guardians. In their first season as the Guardians, the team won the 2022 AL Central Division crown, marking the 11th division title in franchise history. In the best-of-three AL Wild Card Series, the Guardians won the series against the Tampa Bay Rays 2–0, to advance to the AL Division Series. The Guardians lost the series to the New York Yankees 3–2, ending their season. In June 2022, sports investor David Blitzer bought a 25% stake in the franchise with an option to acquire controlling interest in 2028.",
"title": "Franchise history"
},
{
"paragraph_id": 94,
"text": "Following Francona's retirement at the end of the 2023 season, the Guardians named Stephen Vogt as their new manager on November 6, 2023.",
"title": "Franchise history"
},
{
"paragraph_id": 95,
"text": "The rivalry with fellow Ohio team the Cincinnati Reds is known as the Battle of Ohio or Buckeye Series and features the Ohio Cup trophy for the winner. Prior to 1997, the winner of the cup was determined by an annual pre-season baseball game, played each year at minor-league Cooper Stadium in the state capital of Columbus, and staged just days before the start of each new Major League Baseball season. A total of eight Ohio Cup games were played, with the Guardians winning six of them. It ended with the start of interleague play in 1997. The winner of the game each year was awarded the Ohio Cup in postgame ceremonies. The Ohio Cup was a favorite among baseball fans in Columbus, with attendances regularly topping 15,000.",
"title": "Rivalries"
},
{
"paragraph_id": 96,
"text": "Since 1997, the two teams have played each other as part of the regular season, with the exception of 2002. The Ohio Cup was reintroduced in 2008 and is presented to the team who wins the most games in the series that season. Initially, the teams played one three-game series per season, meeting in Cleveland in 1997 and Cincinnati the following year. The teams have played two series per season against each other since 1999, with the exception of 2002, one at each ballpark. A format change in 2013 made each series two games, except in years when the AL and NL Central divisions meet in interleague play, where it is usually extended to three games per series. Through the 2020 meetings, the Guardians lead the series 66–51.",
"title": "Rivalries"
},
{
"paragraph_id": 97,
"text": "An on-and-off rivalry with the Pittsburgh Pirates stems from the close proximity of the two cities, and features some carryover elements from the longstanding rivalry in the National Football League between the Cleveland Browns and Pittsburgh Steelers. Because the Guardians' designated interleague rival is the Reds and the Pirates' designated rival is the Tigers, the teams have played periodically. The teams played one three-game series each year from 1997–2001 and periodically between 2002 and 2022, generally only in years in which the AL Central played the NL Central in the former interleague play rotation. The teams played six games in 2020 as MLB instituted an abbreviated schedule focusing on regional match-ups. Beginning in 2023, the teams will play a three-game series each season as a result of the new \"balanced\" schedule. The Pirates lead the series 21–18.",
"title": "Rivalries"
},
{
"paragraph_id": 98,
"text": "As the Guardians play most of their games every year with each of their AL Central competitors (formerly 19 for each team until 2023), several rivalries have developed.",
"title": "Rivalries"
},
{
"paragraph_id": 99,
"text": "The Guardians have a geographic rivalry with the Detroit Tigers, highlighted in past years by intense battles for the AL Central title. The matchup has some carryover elements from the Ohio State-Michigan rivalry, as well as the general historic rivalry between Michigan and Ohio dating back to the Toledo War.",
"title": "Rivalries"
},
{
"paragraph_id": 100,
"text": "The Chicago White Sox are another rival, dating back to the 1959 season, when the Sox slipped past the Guardians to win the AL pennant. The rivalry intensified when both clubs were moved to the new AL Central in 1994. During that season, the two teams challenged for the division title, with the Guardians one game back of Chicago when the strike began in August. During a game in Chicago, the White Sox confiscated Albert Belle's corked bat, followed by an attempt by Guardians pitcher Jason Grimsley to crawl through the Comiskey Park clubhouse ceiling to retrieve it. Belle later signed with the White Sox in 1997, adding additional intensity to the rivalry.",
"title": "Rivalries"
},
{
"paragraph_id": 101,
"text": "The official team colors are navy blue, red, and white.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 102,
"text": "The primary home uniform is white with navy blue piping around each sleeve. Across the front of the jersey in script font is the word \"Guardians\" in red with a navy blue outline, with navy blue undershirts, belts, and socks.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 103,
"text": "The alternate home jersey is red with a navy blue script \"Guardians\" trimmed in white on the front, and navy blue piping on both sleeves, with navy blue undershirts, belts, and socks.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 104,
"text": "The home cap is navy blue with a red bill and features a red \"diamond C\" on the front.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 105,
"text": "The primary road uniform is gray, with \"Cleveland\" in navy blue \"diamond C\" letters, trimmed in red across the front of the jersey, navy blue piping around the sleeves, and navy blue undershirts, belts, and socks.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 106,
"text": "The alternate road jersey is navy blue with \"Cleveland\" in red \"diamond C\" letters trimmed in white on the front of the jersey, and navy blue undershirts, belts, and socks.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 107,
"text": "The road cap is similar to the home cap, with the only difference being the bill is navy blue.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 108,
"text": "For all games, the team uses a navy blue batting helmet with a red \"diamond C\" on the front.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 109,
"text": "All jerseys home and away feature the \"winged G\" logo on one sleeve, and a patch from Marathon Petroleum – in a sponsorship deal lasting through the 2026 season – on the other. The sleeve featuring the Marathon logo depends on how the player bats – left handed hitters have it on their right sleeve, as that is the arm facing the main TV camera when he bats, and vice versa for right handed batters.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 110,
"text": "The club name and its cartoon logo have been criticized for perpetuating Native American stereotypes. In 1997 and 1998, protesters were arrested after effigies were burned. Charges were dismissed in the 1997 case, and were not filed in the 1998 case. Protesters arrested in the 1998 incident subsequently fought and lost a lawsuit alleging that their First Amendment rights had been violated.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 111,
"text": "Bud Selig (then-Commissioner of Baseball) said in 2014 that he had never received a complaint about the logo. He has heard that there are some protesting against the mascots, but individual teams such as the Indians and Atlanta Braves, whose name was also criticized for similar reasons, should make their own decisions. An organized group consisting of Native Americans, which had protested for many years, protested Chief Wahoo on Opening Day 2015, noting that this was the 100th anniversary since the team became the Indians. Owner Paul Dolan, while stating his respect for the critics, said he mainly heard from fans who wanted to keep Chief Wahoo, and had no plans to change.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 112,
"text": "On January 29, 2018, Major League Baseball announced that Chief Wahoo would be removed from the Indians' uniforms as of the 2019 season, stating that the logo was no longer appropriate for on-field use. The block \"C\" was promoted to the primary logo; at the time, there were no plans to change the team's name.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 113,
"text": "In 2020, protests over the murder of George Floyd, a black man, by a Minneapolis police officer, led Dolan to reconsider use of the Indians name. On July 3, 2020, on the heels of the Washington Redskins announcing that they would \"undergo a thorough review\" of that team's name, the Indians announced that they would \"determine the best path forward\" regarding the team's name and emphasized the need to \"keep improving as an organization on issues of social justice\".",
"title": "Logos and uniforms"
},
{
"paragraph_id": 114,
"text": "On December 13, 2020, it was reported that the Indians name would be dropped after the 2021 season. Although it had been hinted by the team that they may move forward without a replacement name (in similar manner to the Washington Football Team), it was announced via Twitter on July 23, 2021, that the team will be named the Guardians, after the Guardians of Traffic, eight large Art Deco statues on the Hope Memorial Bridge, located close to Progressive Field.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 115,
"text": "The club, however, found itself amid a trademark dispute with a men's roller derby team called the Cleveland Guardians. The Cleveland Guardians roller derby team has competed in the Men's Roller Derby Association since 2016. In addition, two other entities have attempted to preempt the team's use of the trademark by filing their own registrations with the U.S. Patent and Trademark Office. The roller derby team filed a federal lawsuit in the U.S. District Court for the Northern District of Ohio on October 27, 2021, seeking to block the baseball team's name change. On November 16, 2021, the lawsuit was resolved, and both teams were allowed to continue using the Guardians name. The name change from Indians to Guardians became official on November 19, 2021.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 116,
"text": "Cleveland stations WTAM (1100 AM/106.9 FM) and WMMS (100.7 FM) serve as flagship stations for the Cleveland Guardians Radio Network, with lead announcer Tom Hamilton and Jim Rosenhaus calling the games.",
"title": "Media"
},
{
"paragraph_id": 117,
"text": "The television rights are held by Bally Sports Great Lakes. Lead announcer Matt Underwood, analyst and former Indians Gold Glove-winning centerfielder Rick Manning, and field reporter Andre Knott form the broadcast team, with Al Pawlowski and former Indians pitcher Jensen Lewis serving as pregame/postgame hosts. Former Indians Pat Tabler, Ellis Burks and Chris Gimenez serve as contributors and occasional fill-ins for Manning and/or Lewis. Select games are simulcast over-the-air on WKYC channel 3.",
"title": "Media"
},
{
"paragraph_id": 118,
"text": "Notable former broadcasters include Tom Manning, Jack Graney (the first ex-baseball player to become a play-by-play announcer), Ken Coleman, Joe Castiglione, Van Patrick, Nev Chandler, Bruce Drennan, Jim \"Mudcat\" Grant, Rocky Colavito, Dan Coughlin, and Jim Donovan.",
"title": "Media"
},
{
"paragraph_id": 119,
"text": "Previous broadcasters who have had lengthy tenures with the team include Joe Tait (15 seasons between TV and radio), Jack Corrigan (18 seasons on TV), Ford C. Frick Award winner Jimmy Dudley (19 seasons on radio), Mike Hegan (23 seasons between TV and radio), and Herb Score (34 seasons between TV and radio).",
"title": "Media"
},
{
"paragraph_id": 120,
"text": "Under the Cleveland Indians name, the team has been featured in several films, including:",
"title": "Popular culture"
},
{
"paragraph_id": 121,
"text": "Numerous Naps/Indians players have had statues made in their honor:",
"title": "Awards and honors"
},
{
"paragraph_id": 122,
"text": "(*) – Inducted into the Baseball Hall of Fame as an Indian/Nap.",
"title": "Awards and honors"
},
{
"paragraph_id": 123,
"text": "In July 2022 - in honor of the 75th anniversary of Larry Doby becoming the AL's first black player - a mural was added to the exterior of Progressive Field, honoring players who were viewed as barrier breakers that played for the Indians/Guardians. The mural features Doby, Frank Robinson, and Satchel Paige.",
"title": "Awards and honors"
},
{
"paragraph_id": 124,
"text": "A portion of Eagle Avenue near Progressive Field was renamed \"Larry Doby Way\" in 2012",
"title": "Awards and honors"
},
{
"paragraph_id": 125,
"text": "A number of parks and newly built and renovated youth baseball fields in Cleveland have been named after former and current Indians/Guardians players, including:",
"title": "Awards and honors"
},
{
"paragraph_id": 126,
"text": "The Cleveland Guardians farm system consists of seven minor league affiliates.",
"title": "Minor league affiliations"
},
{
"paragraph_id": 127,
"text": "(*) - There were no fans allowed in any MLB stadium in 2020 due to the COVID-19 pandemic.",
"title": "Regular season home attendance"
},
{
"paragraph_id": 128,
"text": "(**) - At the beginning of the season, there was a limit of 30% capacity due to COVID-19 restrictions implemented by Ohio Governor Mike DeWine. On June 2, DeWine lifted the restrictions, and the team immediately allowed full capacity at Progressive Field.",
"title": "Regular season home attendance"
}
] | The Cleveland Guardians are an American professional baseball team based in Cleveland. The Guardians compete in Major League Baseball (MLB) as a member club of the American League (AL) Central division. Since 1994, the team has played its home games at Progressive Field. Since their establishment as a Major League franchise in 1901, the team has won 11 Central Division titles, six American League pennants, and two World Series championships. The team's World Series championship drought since 1948 is the longest active among all 30 current Major League teams. The team's name references the Guardians of Traffic, eight monolithic 1932 Art Deco sculptures by Henry Hering on the city's Hope Memorial Bridge, which is adjacent to Progressive Field. The team's mascot is named "Slider". The team's spring training facility is at Goodyear Ballpark in Goodyear, Arizona. The franchise originated in 1896 as the Columbus Buckeyes, a minor league team based in Columbus, Ohio, that played in the Western League. The team was renamed the Columbus Senators the following year and then relocated to Grand Rapids, Michigan, in the middle of the 1899 season, becoming the Grand Rapids Furniture Makers for the remainder of the season. The team relocated to Cleveland in 1900 and was called the Cleveland Lake Shores. The Western League itself was renamed the American League prior to the 1900 season while continuing its minor league status. When the American League declared itself a major league in 1901, Cleveland was one of its eight charter franchises. Originally called the Cleveland Bluebirds or Blues, the team was also unofficially called the Cleveland Bronchos in 1902. Beginning in 1903, the team was named the Cleveland Napoleons or Naps, after team captain Nap Lajoie. Following Lajoie's departure after the 1914 season, club owner Charles Somers requested that baseball writers choose a new name. They chose the name Cleveland Indians, declared to be a tribute to the nickname that fans gave to the Cleveland Spiders while Louis Sockalexis, a Native American, was playing for the team. That name stuck and remained in use for more than a century. Common nicknames for the Indians were "the Tribe" and "the Wahoos", the latter referencing their longtime logo, Chief Wahoo. After the Indians name came under criticism as part of the Native American mascot controversy, the team adopted the Guardians name following the 2021 season. From August 24 to September 14, 2017, the team won 22 consecutive games, the longest winning streak in American League history and the second-longest winning streak in Major League Baseball history. As of the end of the 2023 season, the franchise's overall record is 9,760–9,300 (.512). | 2001-10-08T21:17:03Z | 2023-12-31T16:11:37Z | [
"Template:Baseball hall of fame list",
"Template:Div col end",
"Template:Dead link",
"Template:Commons category",
"Template:Short description",
"Template:Infobox MLB",
"Template:Convert",
"Template:Better source needed",
"Template:MLBTeam",
"Template:S-ttl",
"Template:Cite magazine",
"Template:Webarchive",
"Template:S-bef",
"Template:Winpct",
"Template:Retired number list",
"Template:Cleveland Guardians",
"Template:Navboxes",
"Template:MLBy",
"Template:Div col",
"Template:S-start-collapsible",
"Template:S-end",
"Template:Cite book",
"Template:Baseball year",
"Template:Main",
"Template:Further",
"Template:Reflist",
"Template:Authority control",
"Template:Use mdy dates",
"Template:See also",
"Template:Ford C. Frick award list",
"Template:Cite news",
"Template:Cite press release",
"Template:S-aft",
"Template:Redirect",
"Template:Wide image",
"Template:Cleveland Indians roster",
"Template:Notelist",
"Template:Win–loss record",
"Template:Cite web",
"Template:Cite court",
"Template:Portal bar"
] | https://en.wikipedia.org/wiki/Cleveland_Guardians |
6,653 | Cape Town | Cape Town is the legislative capital of South Africa. It is the country's oldest city and the seat of the Parliament of South Africa. It is the country's second-largest city, after Johannesburg, and the largest in the Western Cape. The city is part of the City of Cape Town metropolitan municipality.
The city is known for its harbour, its natural setting in the Cape Floristic Region, and for landmarks such as Table Mountain and Cape Point. In 2014, Cape Town was named the best place in the world to visit by The New York Times and similarly by The Daily Telegraph in 2016.
Located on the shore of Table Bay, the City Bowl area of Cape Town is the oldest urban area in the Western Cape, with a significant cultural heritage. It was founded by the Dutch East India Company (VOC) as a supply station for Dutch ships sailing to East Africa, India, and the Far East. Jan van Riebeeck's arrival on 6 April 1652 established the VOC Cape Colony, the first permanent European settlement in South Africa. Cape Town outgrew its original purpose as the first European outpost at the Castle of Good Hope, becoming the economic and cultural hub of the Cape Colony. Until the Witwatersrand Gold Rush and the development of Johannesburg, Cape Town was the largest city in southern Africa.
The metropolitan area has a long coastline on the Atlantic Ocean, which includes False Bay, and extends to the Hottentots Holland mountains to the east. The Table Mountain National Park is within the city boundaries, and there are several other nature reserves and marine protected areas within, and adjacent to, the city, protecting the diverse terrestrial and marine natural environment.
The earliest known remnants of human occupation in the region were found at Peers Cave in Fish Hoek and have been dated to between 15,000 and 12,000 years old.
Little is known of the history of the region's first residents, since there is no written history from the area before it was first mentioned by Portuguese explorer Bartolomeu Dias. Dias, the first European to reach the area, arrived in 1488 and named it "Cape of Storms" (Cabo das Tormentas). It was later renamed by John II of Portugal as "Cape of Good Hope" (Cabo da Boa Esperança) because of the great optimism engendered by the opening of a sea route to the Indian subcontinent and East Indies.
In 1497, Portuguese explorer Vasco da Gama recorded a sighting of the Cape of Good Hope.
In 1510, at the Battle of Salt River, the Portuguese admiral Francisco de Almeida and sixty-four of his men were killed and his party was defeated by the !Uriǁ’aekua ("Goringhaiqua" in Dutch approximate spelling) using specially trained cattle. The !Uriǁ’aekua were one of the so-called Khoekhoe clans who inhabited the area.
In the late 16th century, French, Danish, Dutch and English ships, but mainly Portuguese ones, continued to stop over regularly in Table Bay en route to the Indies. They traded tobacco, copper, and iron with the Khoekhoe clans of the region in exchange for fresh meat and other essential travelling provisions.
In 1652, Jan van Riebeeck and other employees of the United East India Company (Dutch: Verenigde Oost-indische Compagnie, VOC) were sent to the Cape to establish a way-station for ships travelling to the Dutch East Indies, and built the Fort de Goede Hoop (later replaced by the Castle of Good Hope). The settlement grew slowly during this period, as it was hard to find adequate labour. This labour shortage prompted the local authorities to import enslaved people from Indonesia and Madagascar. Many of these people are ancestors of modern-day Cape Coloured communities.
Under Van Riebeeck and his successors, as VOC commanders and later governors at the Cape, a wide range of agricultural plants were introduced to the Cape. Some of these, including grapes, cereals, ground nuts, potatoes, apples and citrus, had a large and lasting influence on the societies and economies of the region.
After the Dutch Republic was transformed into the Batavian Republic, a vassal of Revolutionary France, Great Britain moved to take control of Dutch colonies, including the colonial possessions of the VOC.
Britain captured Cape Town in 1795, but it was returned to the Dutch by treaty in 1803. British forces occupied the Cape again in 1806 following the Battle of Blaauwberg when the successor state to the Batavian Republic, the Kingdom of Holland, allied with France during the Napoleonic Wars.
In the Anglo-Dutch Treaty of 1814, Cape Town was permanently ceded to the United Kingdom. It became the capital of the newly formed Cape Colony, whose territory expanded very substantially through the 1800s. With expansion came calls for greater independence from the UK, with the Cape attaining its own parliament (1854) and a locally accountable Prime Minister (1872). Suffrage was established according to the non-racial Cape Qualified Franchise.
During the 1850s and 1860s, additional plant species were introduced from Australia by the British authorities. Notably, rooikrans was introduced to stabilise the sand of the Cape Flats to allow for a road connecting the peninsula with the rest of the African continent, and eucalyptus was used to drain marshes.
In 1859 the first railway line was built by the Cape Government Railways and a system of railways rapidly expanded in the 1870s. The discovery of diamonds in Griqualand West in 1867, and the Witwatersrand Gold Rush in 1886, prompted a flood of immigration into South Africa. In 1895 the city's first public power station, the Graaff Electric Lighting Works, was opened.
Conflicts between the Boer republics in the interior and the British colonial government resulted in the Second Boer War of 1899–1902. Britain's victory in this war led to the formation of a united South Africa. From 1891 to 1901, the city's population more than doubled from 67,000 to 171,000.
As the 19th century came to an end, Cape Town's economic and political dominance of the southern African region began to give way to the dominance of Johannesburg and Pretoria in the 20th century.
In 1910, Britain established the Union of South Africa, which unified the Cape Colony with the two defeated Boer Republics and the British colony of Natal. Cape Town became the legislative capital of the Union, and later of the Republic of South Africa.
By the time of the 1936 census, Johannesburg had overtaken Cape Town as the largest city in the country.
In 1945 the expansion of the Cape Town foreshore was completed, adding 194 ha (480 acres) of reclaimed land to the City Bowl area of the city centre.
Prior to the mid-twentieth century, Cape Town was one of the most racially integrated cities in South Africa. In the 1948 national elections, the National Party won on a platform of apartheid (racial segregation) under the slogan of "swart gevaar" (Afrikaans for "black danger"). This led to the erosion and eventual abolition of the Cape's multiracial franchise.
In 1950, the apartheid government first introduced the Group Areas Act, which classified and segregated urban areas according to race. Formerly multi-racial suburbs of Cape Town were either purged of residents deemed unlawful by apartheid legislation, or demolished. The most infamous example of this in Cape Town was the suburb of District Six. After it was declared a whites-only area in 1965, all housing there was demolished and over 60,000 residents were forcibly removed. Many of these residents were relocated to the Cape Flats.
The earliest of the Cape Flats forced removals saw the expulsion of Black South Africans to Langa, Cape Town's first and oldest township, in line with the Natives (Urban Areas) Act of 1923.
Under apartheid, the Cape was considered a "Coloured labour preference area", to the exclusion of "Bantus", i.e. Black Africans. The implementation of this policy was widely opposed by trade unions, civil society and opposition parties. It is notable that this policy was not advocated for by any Coloured political group, and its implementation was a unilateral decision by the apartheid government. During the student-led Soweto Uprising of June 1976, school students from Langa, Gugulethu and Nyanga in Cape Town reacted to the news of the protests against Bantu Education by organising gatherings and marches of their own. A number of school buildings were burnt down and the protest action was met with forceful resistance from the police.
Cape Town has been home to many leaders of the anti-apartheid movement. In Table Bay, 10 km (6 mi) from the city, lies Robben Island. This penitentiary island was the site of a maximum security prison where many famous apartheid-era political prisoners served long sentences. Famous prisoners include activist, lawyer and future president Nelson Mandela, who served 18 of his 27 years of imprisonment on the island, as well as two other future presidents, Kgalema Motlanthe and Jacob Zuma.
In one of the most famous moments marking the end of apartheid, Nelson Mandela made his first public speech since his imprisonment from the balcony of Cape Town City Hall, hours after being released on 11 February 1990. His speech heralded the beginning of a new era for the country. The first democratic election was held four years later, on 27 April 1994.
Nobel Square in the Victoria & Alfred Waterfront features statues of South Africa's four Nobel Peace Prize winners: Albert Luthuli, Desmond Tutu, F. W. de Klerk and Nelson Mandela.
Cape Town faced a severe water shortage from 2015 to 2018.
Since the 2010s, Cape Town and the wider Western Cape province have seen the rise of a small secessionist movement. Support for parties "which have formally adopted Cape independence" was around 5% in the 2021 municipal elections.
Cape Town is located at latitude 33.55° S (approximately the same as Sydney and Buenos Aires and equivalent to Casablanca and Los Angeles in the northern hemisphere) and longitude 18.25° E.
Table Mountain, with its near vertical cliffs and flat-topped summit over 1,000 m (3,300 ft) high, and with Devil's Peak and Lion's Head on either side, together form a dramatic mountainous backdrop enclosing the central area of Cape Town, the so-called City Bowl. A thin strip of cloud, known colloquially as the "tablecloth" ("Karos" in Afrikaans), sometimes forms on top of the mountain. To the immediate south of the city, the Cape Peninsula is a scenic mountainous spine jutting 40 km (25 mi) southward into the Atlantic Ocean and terminating at Cape Point.
There are over 70 peaks above 300 m (980 ft) within Cape Town's official metropolitan limits. Many of the city's suburbs lie on the large plain called the Cape Flats, which extends over 50 km (30 mi) to the east and joins the peninsula to the mainland. The Cape Town region is characterised by an extensive coastline, rugged mountain ranges, coastal plains and inland valleys.
The extent of Cape Town has varied considerably over time. It originated as a small settlement at the foot of Table Mountain and has grown beyond its city limits as a metropolitan area to encompass the entire Cape Peninsula to the south, the Cape Flats, the Helderberg basin and part of the Steenbras catchment area to the east, and the Tygerberg hills, Blouberg and other areas to the north. Robben Island in Table Bay is also part of Cape Town. It is bounded by the Atlantic Ocean to the west, and False Bay to the south. To the north and east, the extent is demarcated by boundaries of neighbouring municipalities within the Western Cape province.
The official boundaries of the city proper extend between the City Bowl and the Atlantic Seaboard to the east and the Southern Suburbs to the south. The City of Cape Town, the metropolitan municipality that takes its name from the city, covers the Greater Cape Town metropolitan area, known as the Cape Metropole, extending beyond the city proper to include a number of satellite towns, suburbs and rural areas, such as Milnerton, Atlantis, Bellville, Brackenfell, Durbanville, Goodwood, Gordon's Bay, Hout Bay, Kraaifontein, Kuilsrivier, Muizenberg, Simon's Town, Somerset West and Strand, among others.
The Cape Peninsula is 52 km (30 mi) long from Mouille Point in the north to Cape Point in the south, with an area of about 470 km² (180 sq mi), and it displays more topographical variety than other similarly sized areas in southern Africa, and consequently spectacular scenery. There are diverse low-nutrient soils, large rocky outcrops, scree slopes, a mainly rocky coastline with embayed beaches, and considerable local variation in climatic conditions. The sedimentary rocks of the Cape Supergroup, of which parts of the Graafwater and Peninsula Formations remain, were uplifted between 280 and 215 million years ago, and were largely eroded away during the Mesozoic. The region was geologically stable during the Tertiary, which has led to slow denudation of the durable sandstones. Erosion rate and drainage have been influenced by fault lines and fractures, leaving remnant steep-sided massifs like Table Mountain surrounded by flatter slopes of deposits of the eroded material overlaying the older rocks.
There are two internationally notable landmarks, Table Mountain and Cape Point, at opposite ends of the Peninsula Mountain Chain, with the Cape Flats and False Bay to the east and the Atlantic Ocean to the west. The landscape is dominated by sandstone plateaux and ridges, which generally drop steeply at their margins to the surrounding debris slopes, interrupted by a major gap at the Fish Hoek–Noordhoek valley. In the south much of the area is a low sandstone plateau with sand dunes. Maximum altitude is 1,113 m on Table Mountain. The Cape Flats (Afrikaans: Kaapse Vlakte) is a flat, low-lying, sandy area to the east of the Cape Peninsula and west of the Helderberg, much of which was wetland and dunes within recent history. To the north are the Tygerberg Hills and the Stellenbosch district.
The Helderberg area of Greater Cape Town, previously known as the "Hottentots-Holland" area, is mostly residential, but is also a wine-producing area east of the Cape Flats, west of the Hottentots Holland mountain range and south of the Helderberg mountain, from which it gets its current name. The Helderberg consists of the previous municipalities of Somerset West, Strand, Gordons Bay and a few other towns. Industry and commerce are largely in service of the area. After the Cape Peninsula, the Helderberg is the next most mountainous part of Greater Cape Town, bordered to the north and east by the highest peaks in the region along the watershed of the Helderberg and Hottentots Holland Mountains, which are part of the Cape Fold Belt, with Cape Supergroup strata on a basement of Tygerberg Formation rocks intruded by part of the Stellenbosch granite pluton. The region includes the entire catchment of the Lourens and Sir Lowry's rivers, separated by the Schapenberg hill, and a small part of the catchment of the Eerste River to the west. The Helderberg is ecologically highly diverse, rivalling the Cape Peninsula, and has its own endemic ecoregions and several conservation areas.
To the east of the Hottentots Holland mountains is the valley of the Steenbras River, in which the Steenbras Dam was built as a water supply for Cape Town. The dam has been supplemented by several other dams around the western Cape, some of them considerably larger. This is almost entirely a conservation area, of high biodiversity. Bellville, Brackenfell, Durbanville, Kraaifontein, Goodwood and Parow are a few of the towns that make up the Northern Suburbs of Cape Town. In current popular culture these areas are often referred to as being beyond the "boerewors curtain," a play on the term "iron curtain."
UNESCO declared Robben Island in the Western Cape a World Heritage Site in 1999. Robben Island is located in Table Bay, some 6 km (3.7 mi) west of Bloubergstrand, a coastal suburb north of Cape Town, and stands some 30 m above sea level. Robben Island has been used as a prison where people were isolated, banished, and exiled for nearly 400 years. It was also used as a leper colony, a post office, a grazing ground, a mental hospital, and an outpost.
The Cape Peninsula is a rocky and mountainous peninsula that juts out into the Atlantic Ocean at the south-western extremity of the continent. At its tip is Cape Point and the Cape of Good Hope. The peninsula forms the west side of False Bay and the Cape Flats. On the east side are the Helderberg and Hottentots Holland mountains. The three main rock formations are the late-Precambrian Malmesbury Group (sedimentary and metamorphic rock); the Cape Granite Suite, comprising the huge Peninsula, Kuilsrivier-Helderberg, and Stellenbosch batholiths, which were intruded into the Malmesbury Group about 630 million years ago; and the Table Mountain Group sandstones, which were deposited on the eroded surface of the granite and Malmesbury Group basement about 450 million years ago.
The sand, silt and mud deposits were lithified by pressure and then folded during the Cape Orogeny to form the Cape Fold Belt, which extends in an arc along the western and southern coasts. The present landscape is due to prolonged erosion having carved out deep valleys, removing parts of the once continuous Table Mountain Group sandstone cover from over the Cape Flats and False Bay, and leaving high residual mountain ridges.
At times the sea covered the Cape Flats and Noordhoek valley, and the Cape Peninsula was then a group of islands. During glacial periods the sea level dropped to expose the bottom of False Bay to weathering and erosion, with the last major regression leaving the entire bottom of False Bay exposed. During this period an extensive system of dunes was formed on the sandy floor of False Bay. At this time the drainage outlets lay between Rocky Bank and Cape Point to the west, and between Rocky Bank and Hangklip Ridge to the east, with the watershed roughly along the line of the contact zone east of Seal Island and Whittle Rock.
Cape Town has a warm Mediterranean climate (Köppen: Csb), with mild, moderately wet winters and dry, warm summers. Winter, which lasts from June to September, may see large cold fronts entering for limited periods from the Atlantic Ocean with significant precipitation and strong north-westerly winds. Winter months in the city average a maximum of 18 °C (64 °F) and minimum of 8.5 °C (47 °F). Total annual rainfall in the city averages 515 mm (20.3 in) although in the Southern Suburbs, close to the mountains, rainfall is significantly higher and averages closer to 1,000 mm (39.4 in).
Summer, which lasts from December to March, is warm and dry with an average maximum of 26 °C (79 °F) and minimum of 16 °C (61 °F). The region can get uncomfortably hot when the Berg Wind, meaning "mountain wind", blows from the Karoo interior. Spring and summer generally feature a strong wind from the south-east, known locally as the south-easter or the Cape Doctor, so called because it blows air pollution away. This wind is caused by a persistent high-pressure system over the South Atlantic to the west of Cape Town, known as the South Atlantic High, which shifts latitude seasonally, following the sun, and influencing the strength of the fronts and their northward reach. Cape Town receives about 3,100 hours of sunshine per year.
Water temperatures range greatly, between 10 °C (50 °F) on the Atlantic Seaboard, to over 22 °C (72 °F) in False Bay. Average annual ocean surface temperatures are between 13 °C (55 °F) on the Atlantic Seaboard (similar to Californian waters, such as San Francisco or Big Sur), and 17 °C (63 °F) in False Bay (similar to Northern Mediterranean temperatures, such as Nice or Monte Carlo).
Unlike other parts of the country, the city does not have many thunderstorms, and most of those that do occur happen around October to December and March to April.
A 2019 paper published in PLOS One estimated that under Representative Concentration Pathway 4.5, a "moderate" scenario of climate change where global warming reaches ~2.5–3 °C (4.5–5.4 °F) by 2100, the climate of Cape Town in the year 2050 would most closely resemble the current climate of Perth in Australia. The annual temperature would increase by 1.1 °C (2.0 °F), and the temperature of the coldest month by 0.3 °C (0.54 °F), while the temperature of the warmest month would be 2.3 °C (4.1 °F) higher. According to Climate Action Tracker, the current warming trajectory appears consistent with 2.7 °C (4.9 °F), which closely matches RCP 4.5.
Moreover, according to the 2022 IPCC Sixth Assessment Report, Cape Town is one of 12 major African cities (Abidjan, Alexandria, Algiers, Cape Town, Casablanca, Dakar, Dar es Salaam, Durban, Lagos, Lomé, Luanda and Maputo) which would be the most severely affected by future sea level rise. It estimates that they would collectively sustain cumulative damages of US$65 billion under RCP 4.5 and US$86.5 billion for the high-emission scenario RCP 8.5 by the year 2050. Additionally, RCP 8.5 combined with the hypothetical impact from marine ice sheet instability at high levels of warming would involve up to US$137.5 billion in damages, while the additional accounting for the "low-probability, high-damage events" may increase aggregate risks to US$187 billion for the "moderate" RCP4.5, US$206 billion for RCP8.5 and US$397 billion under the high-end ice sheet instability scenario. Since sea level rise would continue for about 10,000 years under every scenario of climate change, future costs of sea level rise would only increase, especially without adaptation measures.
Cape Town's coastal water ranges from cold to mild, and the difference between the two sides of the peninsula can be dramatic. While the Atlantic Seaboard averages annual sea surface temperatures around 13 °C (55 °F), the False Bay coast is much warmer, averaging between 16 and 17 °C (61 and 63 °F) annually. In summer, False Bay water averages slightly over 20 °C (68 °F), with 22 °C (72 °F) an occasional high. Beaches located on the Atlantic Coast tend to have colder water due to the wind driven upwellings which contribute to the Benguela Current which originates off the Cape Peninsula, while the water at False Bay beaches may occasionally be warmer by up to 10 °C (18 °F) at the same time in summer.
In summer False Bay is thermally stratified, with a vertical temperature variation of 5 to 9 °C between the warmer surface water and cooler depths below 50 m, while in winter the water column is at nearly constant temperature at all depths. The development of a thermocline is strongest around late December and peaks in late summer to early autumn. In summer the south-easterly winds generate a zone of upwelling near Cape Hangklip, where surface water temperatures can be 6 to 7 °C colder than the surrounding areas, and bottom temperatures below 12 °C.
In the summer to early autumn (January–March), cold water upwelling near Cape Hangklip causes a strong surface temperature gradient between the south-western and north-eastern corners of the bay. In winter the surface temperature tends to be much the same everywhere. In the northern sector surface temperature varies a bit more (13 to 22 °C) than in the south (14 to 20 °C) during the year.
Surface temperature variation from year to year is linked to the El Niño–Southern Oscillation. During El Niño years the South Atlantic high is shifted, reducing the south-easterly winds, so upwelling and evaporative cooling are reduced and sea surface temperatures throughout the bay are warmer, while in La Niña years there is more wind and upwelling and consequently lower temperatures. Surface water heating during El Niño increases vertical stratification. The relationship is not linear. Occasionally eddies from the Agulhas current will bring warmer water and vagrant sea life carried from the south and east coasts into False Bay.
Located in a Conservation International biodiversity hotspot as well as the unique Cape Floristic Region, the city of Cape Town has one of the highest levels of biodiversity of any equivalent area in the world. These protected areas form part of a World Heritage Site, and an estimated 2,200 species of plants are confined to Table Mountain – more than exist in the whole of the United Kingdom, which has 1,200 plant species, 67 of them endemic. Many of these species, including a great many types of proteas, are endemic to the mountain and can be found nowhere else.
It is home to a total of 19 different vegetation types, of which several are endemic to the city and occur nowhere else in the world. It is also the only habitat of hundreds of endemic species, and hundreds of others which are severely restricted or threatened. This enormous species diversity is mainly because the city is uniquely located at the convergence point of several different soil types and micro-climates.
Table Mountain has an unusually rich biodiversity. Its vegetation consists predominantly of several different types of the unique and rich Cape Fynbos. The main vegetation type is endangered Peninsula Sandstone Fynbos, but critically endangered Peninsula Granite Fynbos, Peninsula Shale Renosterveld and Afromontane forest occur in smaller portions on the mountain.
Rapid population growth and urban sprawl have covered much of these ecosystems with development. Consequently, Cape Town now has over 300 threatened plant species and 13 that are now extinct. The Cape Peninsula, which lies entirely within the city of Cape Town, has the highest concentration of threatened species of any continental area of equivalent size in the world. Tiny remnant populations of critically endangered or near-extinct plants sometimes survive on roadsides, pavements and sports fields. The remaining ecosystems are partially protected through a system of over 30 nature reserves – including the massive Table Mountain National Park.
Cape Town reached first place in the 2019 iNaturalist City Nature Challenge in two out of the three categories: Most Observations, and Most Species. This was the first entry by Capetonians in this annual competition to observe and record the local biodiversity over a four-day long weekend during what is considered the worst time of the year for local observations. A worldwide survey suggested that the extinction rate of endemic plants from the City of Cape Town is one of the highest in the world, at roughly three per year since 1900 – partly a consequence of the very small and localised habitats and high endemicity.
Cape Town is governed by a 231-member city council elected in a system of mixed-member proportional representation. The city is divided into 116 wards, each of which elects a councillor by first-past-the-post voting. The remaining 115 councillors are elected from party lists so that the total number of councillors for each party is proportional to the number of votes received by that party.
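To make the top-up arithmetic concrete, the sketch below is illustrative only: it assumes a simple largest-remainder allocation with made-up party names and vote totals, not the statutory formula used in South African municipal elections, which is defined in legislation and may differ in detail. It shows how the 115 list seats can be assigned so that each party's combined ward and list seats track its share of the vote:

    # Illustrative sketch only (hypothetical parties and vote counts):
    # allocate a 231-seat council so that each party's total seats
    # (ward + list) are roughly proportional to its votes, using a
    # largest-remainder top-up over the 115 list seats.
    def allocate_list_seats(votes, ward_seats, total_seats=231):
        total_votes = sum(votes.values())
        # Each party's ideal proportional entitlement out of 231 seats
        quotas = {p: total_seats * v / total_votes for p, v in votes.items()}
        # Start with the whole-number part of each entitlement...
        totals = {p: int(q) for p, q in quotas.items()}
        # ...then hand out the remaining seats by largest fractional remainder
        leftover = total_seats - sum(totals.values())
        by_remainder = sorted(quotas, key=lambda p: quotas[p] - int(quotas[p]),
                              reverse=True)
        for p in by_remainder[:leftover]:
            totals[p] += 1
        # List seats top up whatever ward seats each party has already won
        return {p: max(totals[p] - ward_seats.get(p, 0), 0) for p in totals}

    # Hypothetical example: two parties, the 116 ward seats already decided
    print(allocate_list_seats({"Party A": 550_000, "Party B": 450_000},
                              {"Party A": 60, "Party B": 56}))
    # -> {'Party A': 67, 'Party B': 48}  (60 + 67 = 127 seats, about 55% of 231)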
In the 2021 municipal elections, the Democratic Alliance (DA) kept its majority, this time diminished, taking 136 seats. The African National Congress lost substantially, receiving 43 of the seats. The Democratic Alliance candidate for the Cape Town mayoralty, Geordin Hill-Lewis, was elected mayor.
Cape Town has nineteen active sister city agreements.
The City of Cape Town has expressed explicit support for Ukraine during the 2022 invasion of the country by Russia. To show this support the City of Cape Town lit up the Old City Hall in the colours of the Ukrainian flag on 2 March 2022. This has differentiated the city from the officially neutral foreign policy position taken by the South African national government.
According to the South African National Census of 2011, the population of the City of Cape Town metropolitan municipality – an area that includes suburbs and exurbs – is 3,740,026 people. This represents an annual growth rate of 2.6% compared to the results of the previous census in 2001 which found a population of 2,892,243 people. Of those residents who were asked about their first language, 35.7% spoke Afrikaans, 29.8% spoke Xhosa and 28.4% spoke English. 24.8% of the population is under the age of 15, while 5.5% is 65 or older. The sex ratio is 96, meaning that there are slightly more women than men.
Of those residents aged 20 or older, 1.8% have no schooling, 8.1% have some schooling but did not finish primary school, 4.6% finished primary school but have no secondary schooling, 38.9% have some secondary schooling but did not finish Grade 12, 29.9% finished Grade 12 but have no higher education, and 16.7% have higher education. Overall, 46.6% have at least a Grade 12 education. Of those aged between 5 and 25, 67.8% are attending an educational institution. Amongst those aged between 15 and 65 the unemployment rate is 23.7%. The average annual household income is R161,762.
The total number of households grew from 653,085 in 1996 to 1,068,572 in 2011, which represents an increase of 63.6%. The average number of household members declined from 3.92 in 1996 to 3.50 in 2011. Of those households, 78.4% are in formal structures (houses or flats), while 20.5% are in informal structures (shacks). 97.3% of City-supplied households have access to electricity, and 94.0% of households use electricity for lighting. 87.3% of households have piped water to the dwelling, while 12.0% have piped water through a communal tap. 94.9% of households have regular refuse collection service. 91.4% of households have a flush toilet or chemical toilet, while 4.5% still use a bucket toilet. 82.1% of households have a refrigerator, 87.3% have a television and 70.1% have a radio. Only 34.0% have a landline telephone, but 91.3% have a cellphone. 37.9% have a computer, and 49.3% have access to the Internet (either through a computer or a cellphone).
In 2011 over 70% of cross-provincial South African migrants coming into the Western Cape settled in Cape Town; 53.64% of South African migrants into the Western Cape came from the Eastern Cape, the old Cape Colony's former native reserve, and 20.95% came from Gauteng province.
According to the 2016 City of Cape Town community survey, there were 4,004,793 people in the City of Cape Town metro. Out of this population, 45.7% identified as Black African, 35.1% identified as Coloured, 16.2% identified as White and 1.6% identified as Asian.
During the outbreak of the COVID-19 pandemic in South Africa, local media reported that increasing numbers of wealthy and middle-class South Africans have started moving from inland areas to coastal regions of the country, most notably Cape Town, in a phenomenon referred to as "semigration." Declining municipal services in the rest of the country and the South African energy crisis are other cited reasons for semigration.
The city's population is expected to grow by an additional 400,000 residents between 2020 and 2025, with 76% of those new residents falling into the low-income bracket, earning less than R13,000 a month.
In the 2015 General Household Survey 82.3% of respondents self identified as Christian, 8% as Muslim, 3.8% as following a traditional African religion and 3.1% as "nothing in particular."
Most places of worship in the city are Christian churches and cathedrals: Zion Christian Church, Apostolic Faith Mission of South Africa, Assemblies of God, Baptist Union of Southern Africa (Baptist World Alliance), Methodist Church of Southern Africa (World Methodist Council), Anglican Church of Southern Africa (Anglican Communion), Presbyterian Church of Africa (World Communion of Reformed Churches), Roman Catholic Archdiocese of Cape Town (Catholic Church), the Orthodox Archbishopric of Good Hope (Greek Orthodox Cathedral of St George) and the Church of Jesus Christ of Latter-day Saints (LDS Church). On 4 April 2021, the LDS Church announced the construction of a temple, with groundbreaking dates yet to be announced.
Islam is the city's second-largest religion, with a long history in Cape Town that has produced a number of mosques and other Muslim religious sites spread across the city, such as the Auwal Mosque, South Africa's first mosque.
Cape Town's significant Jewish population supports a number of synagogues, most notably the historic Gardens Shul, the oldest Jewish congregation in South Africa. Marais Road Shul in Sea Point, the city's Jewish hub, is the largest Jewish congregation in South Africa. Temple Israel (Cape Town Progressive Jewish Congregation) has three temples in the city. There is also a Chabad centre in Sea Point and a Chabad on Campus at the University of Cape Town, catering to Jewish students.
Other religious sites in the city include Hindu, Buddhist and Baháʼí temples.
In recent years, the city has struggled with drugs, a surge in violent drug-related crime and, more recently, gang violence. In the Cape Flats alone, there were approximately 100,000 people in over 130 different gangs in 2018. While there are some alliances, this multitude and division is also a cause of conflict between groups. At the same time, the economy has grown due to the boom in the tourism and real estate industries. Since July 2019, widespread violent crime in poorer gang-dominated areas of greater Cape Town has resulted in an ongoing military presence in those neighbourhoods. Cape Town had the highest murder rate among large South African cities at 77 murders per 100,000 people in the period April 2018 to March 2019, with 3,157 murders, mostly occurring in poor townships created under the apartheid regime. In 2022 the Mexican Council for Public Security and Criminal Justice ranked Cape Town as one of the 50 most violent cities in the world.
The city is South Africa's second-largest economic centre and Africa's third-largest economic hub, and serves as the regional manufacturing centre of the Western Cape. In 2019 the city's gross metropolitan product (GMP) of R489 billion (US$33.04 billion) represented 71.1% of the Western Cape's total gross regional product and 9.6% of South Africa's total GDP; the city also accounted for 11.1% of all employed people in the country and had a citywide GDP per capita of R111,364 (US$7,524). Since the global financial crisis of 2007, the city's economic growth rate has mirrored South Africa's declining growth, whilst the city's population growth rate has remained steady at around 2% a year. Around 80% of the city's economic activity is generated by the tertiary sector, with the finance, retail, real estate, and food and beverage industries being the four largest contributors to the city's economic growth.
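The rand and dollar figures above are internally consistent, which can be verified directly; a minimal sketch, noting that the exchange rate is inferred from the quoted 2019 figures rather than being an official value:

```python
# Cross-checking the 2019 economic figures quoted above.
gmp_rand = 489e9         # gross metropolitan product, R489 billion
gmp_usd = 33.04e9        # the same figure in US dollars
gdp_pc_rand = 111_364    # citywide GDP per capita, rand

implied_rate = gmp_rand / gmp_usd        # rand per US dollar
gdp_pc_usd = gdp_pc_rand / implied_rate  # per-capita figure in dollars

print(f"Implied exchange rate: R{implied_rate:.2f} per US$")  # ~R14.80
print(f"GDP per capita: US${gdp_pc_usd:,.0f}")                # ~US$7,524
```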
In 2008 the city was named the most entrepreneurial city in South Africa, with the percentage of Capetonians pursuing business opportunities almost three times the national average. Capetonians aged between 18 and 64 were 190% more likely than the national average to pursue a new business, whilst in Johannesburg the same demographic group was only 60% more likely.
With the highest number of successful information technology companies in Africa, Cape Town is an important centre for the industry on the continent, including a growing number of companies in the space industry. Growing at an annual rate of 8.5% and with an estimated worth of R77 billion nationwide in 2010, the high-tech industry is becoming increasingly important to the city's economy. A number of entrepreneurship initiatives and universities in the city host technology startups such as Jumo, Yoco, Aerobotics, Luno, the telecommunications provider Rain and The Sun Exchange.
The city has the largest film industry in the Southern Hemisphere, generating R5 billion (US$476.19 million) in revenue and providing an estimated 6,058 direct and 2,502 indirect jobs in 2013. Much of the industry is based at the Cape Town Film Studios.
Most companies headquartered in the city are insurance companies, retail groups, publishers, design houses, fashion designers, shipping companies, petrochemical companies, architects and advertising agencies. Notable companies headquartered in the city include food and fashion retailer Woolworths, supermarket chains Pick n Pay Stores and Shoprite, New Clicks Holdings Limited, fashion retailer Foschini Group, internet service provider MWEB, Mediclinic International, eTV, multinational mass media giant Naspers, and financial services giants Sanlam and Old Mutual.
Other notable companies include Belron, Ceres Fruit Juices, Coronation Fund Managers, Vida e Caffè and Capitec Bank. The city is a manufacturing base for several multinational companies including Johnson & Johnson, GlaxoSmithKline, Levi Strauss & Co., Adidas, Bokomo Foods, Yoco and Nampak. Amazon Web Services maintains one of its largest facilities in the world in Cape Town, with the city serving as the Africa headquarters for its parent company, Amazon.
The city of Cape Town's Gini coefficient, a measure of income inequality, is 0.58: lower than South Africa's national figure of 0.7, making it more equal than the rest of the country or any other major South African city, although still highly unequal by international standards. The city's Gini coefficient improved by dropping from 0.59 in 2007 to 0.57 in 2010, only to increase to 0.58 by 2017.
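The Gini coefficient is a standard measure of inequality: 0 means income is shared perfectly equally and 1 means a single household receives everything. A minimal sketch of how it is computed from a list of incomes (the incomes below are invented for illustration and are not Cape Town data):

```python
def gini(incomes):
    """Gini coefficient via the mean absolute difference:
    G = sum_ij |x_i - x_j| / (2 * n^2 * mean(x))."""
    n = len(incomes)
    mean = sum(incomes) / n
    diff_sum = sum(abs(xi - xj) for xi in incomes for xj in incomes)
    return diff_sum / (2 * n * n * mean)

print(gini([10, 10, 10, 10]))  # 0.0  -> perfect equality
print(gini([0, 0, 0, 100]))    # 0.75 -> highly unequal
```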
The Western Cape is a highly important tourist region in South Africa; the tourism industry accounts for 9.8% of the GDP of the province and employs 9.6% of the province's workforce. In 2010, over 1.5 million international tourists visited the area. Cape Town is a popular international tourist destination not only in South Africa but in Africa as a whole, owing to its mild climate, natural setting, and well-developed infrastructure. The city has several well-known natural features that attract tourists, most notably Table Mountain, which forms a large part of the Table Mountain National Park and is the back end of the City Bowl. Reaching the top of the mountain can be achieved either by hiking up or by taking the Table Mountain Cableway. Cape Point is the dramatic headland at the end of the Cape Peninsula. Many tourists also drive along Chapman's Peak Drive, a narrow road that links Noordhoek with Hout Bay, for the views of the Atlantic Ocean and nearby mountains. It is possible to either drive or hike up Signal Hill for closer views of the City Bowl and Table Mountain.
Many tourists also visit Cape Town's beaches, which are popular with local residents. It is possible to visit several different beaches in the same day, each with a different setting and atmosphere. Both coasts are popular, although the beaches in affluent Clifton and elsewhere on the Atlantic Coast are better developed with restaurants and cafés, with a strip of restaurants and bars accessible to the beach at Camps Bay. The Atlantic seaboard, known as Cape Town's Riviera, is regarded as one of the most scenic routes in South Africa, along the slopes of the Twelve Apostles to the boulders and white sand beaches of Llandudno, with the route ending in Hout Bay, a diverse suburb with a fishing and recreational boating harbour near a small island with a breeding colony of African fur seals. This suburb is also accessible by road from the Constantia valley over the mountains to the northeast, and via the picturesque Chapman's Peak drive from the residential suburb Noordhoek in the Fish Hoek valley to the south-east. Boulders Beach near Simon's Town is known for its colony of African penguins.
The city has several notable cultural attractions. The Victoria & Alfred Waterfront, built on top of part of the docks of the Port of Cape Town, is the city's most visited tourist attraction. It is also one of the city's most popular shopping venues, with several hundred shops as well as the Two Oceans Aquarium. The V&A also hosts the Nelson Mandela Gateway, through which ferries depart for Robben Island. It is possible to take a ferry from the V&A to Hout Bay, Simon's Town and the Cape fur seal colonies on Seal and Duiker Islands. Several companies offer tours of the Cape Flats, a region of mostly Coloured and Black townships.
Within the metropolitan area, the most popular areas for visitors to stay include Camps Bay, Sea Point, the V&A Waterfront, the City Bowl, Hout Bay, Constantia, Rondebosch, Newlands, and Somerset West. In November 2013, Cape Town was voted the best global city in The Daily Telegraph's annual Travel Awards. Cape Town offers tourists a range of air, land and sea-based adventure activities, including helicopter rides, paragliding and skydiving, snorkelling and scuba diving, boat trips, game-fishing, hiking, mountain biking and rock climbing. Surfing is popular and the city hosts the Red Bull Big Wave Africa surfing competition every year, and there is some local and international recreational scuba tourism.
The City of Cape Town works closely with Cape Town Tourism to promote the city both locally and internationally. The primary focus of Cape Town Tourism is to represent Cape Town as a tourist destination. Cape Town Tourism receives a portion of its funding from the City of Cape Town while the remainder is made up of membership fees and own-generated funds. The Tristan da Cunha government owns and operates a lodging facility in Cape Town which charges discounted rates to Tristan da Cunha residents and non-resident natives. Cape Town's transport system links it to the rest of South Africa; it serves as the gateway to other destinations within the province. The Cape Winelands and in particular the towns of Stellenbosch, Paarl and Franschhoek are popular day trips from the city for sightseeing and wine tasting.
Most goods are handled through the Port of Cape Town or Cape Town International Airport. Most major shipbuilding companies have offices in Cape Town. The province is also a centre of energy development for the country, with the existing Koeberg nuclear power station providing energy for the Western Cape's needs.
Greater Cape Town has four major commercial nodes, with Cape Town Central Business District containing the majority of job opportunities and office space. Century City, the Bellville/Tygervalley strip and Claremont commercial nodes are well established and contain many offices and corporate headquarters.
Public primary and secondary schools in Cape Town are run by the Western Cape Education Department. This provincial department is divided into seven districts; four of these are "Metropole" districts – Metropole Central, North, South, and East – which cover various areas of the metropolis. There are also many private schools, both religious and secular. Cape Town has a well-developed higher education system. The city is served by three public universities: the University of Cape Town (UCT), the University of the Western Cape (UWC) and the Cape Peninsula University of Technology (CPUT). Stellenbosch University, while not based in the metropolitan area itself, has its main campus and administrative section 50 kilometres from the City Bowl and has additional campuses, such as the Tygerberg Faculty of Medicine and Health Sciences and the Bellville Business Park, north-west of the city in the town of Bellville.
Both the University of Cape Town and Stellenbosch University are leading universities in South Africa, due in large part to substantial financial contributions made to these institutions by both the public and private sectors. UCT is an English-language institution with over 21,000 students; it has an MBA programme that was ranked 51st by the Financial Times in 2006, and it is the top-ranked university in Africa, being the only African university to make the world's Top 200 university list, at number 146. Since the African National Congress became the country's ruling party, some restructuring of Western Cape universities has taken place, and traditionally non-white universities have seen increased financing, which has benefited the University of the Western Cape.
The Cape Peninsula University of Technology was formed on 1 January 2005, when two separate institutions, Cape Technikon and Peninsula Technikon, were merged. The new university offers education primarily in English, although one may take courses in any of South Africa's official languages. The institution generally awards the National Diploma. Students from the universities and high schools are involved in the South African branch of SEDS (Students for the Exploration and Development of Space), which, like SEDS branches in other countries, prepares enthusiastic students and young professionals for the growing space industry. As well as the universities, there are several colleges in and around Cape Town, including the College of Cape Town, False Bay College and Northlink College. Many students use NSFAS funding to help pay for tertiary education at these TVET colleges. Cape Town has also become a popular study-abroad destination for many international college students. Many study-abroad providers offer semester, summer, short-term, and internship programmes in partnership with Cape Town universities as a chance for international students to gain intercultural understanding.
The Western Cape Water Supply System (WCWSS) is a complex water supply system in the Western Cape region of South Africa, comprising an inter-linked system of six main dams, pipelines, tunnels and distribution networks, and a number of minor dams, some owned and operated by the Department of Water and Sanitation and some by the City of Cape Town.
The Cape Town water crisis of 2017 to 2018 was a period of severe water shortage in the Western Cape region, most notably affecting the City of Cape Town. While dam water levels had been declining since 2015, the Cape Town water crisis peaked during mid-2017 to mid-2018 when water levels hovered between 15 and 30 percent of total dam capacity.
In late 2017, there were first mentions of plans for "Day Zero", a shorthand reference for the day when the water level of the major dams supplying the city could fall below 13.5 percent. "Day Zero" would mark the start of Level 7 water restrictions, when municipal water supplies would be largely switched off and it was envisioned that residents could have to queue for their daily ration of water. If this had occurred, it would have made the City of Cape Town the first major city in the world to run out of water.
The city of Cape Town implemented significant water restrictions in a bid to curb water usage, and succeeded in reducing its daily water usage by more than half to around 500 million litres (130,000,000 US gal) per day in March 2018. The fall in water usage led the city to postpone its estimate for "Day Zero", and strong rains starting in June 2018 led to dam levels recovering. In September 2018, with dam levels close to 70 percent, the city began easing water restrictions, indicating that the worst of the water crisis was over. Good rains in 2020 effectively broke the drought and the resulting water shortage when dam levels reached 95 percent. Concerns have been raised, however, that unsustainable demand and limited water supply could result in future drought events.
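The "Day Zero" date was essentially a projection of when storage would reach the 13.5 percent cut-off at prevailing consumption. A minimal sketch of that style of calculation (the total capacity figure below is an assumption for illustration; only the 13.5% threshold and the roughly 500-million-litre daily usage appear in the text):

```python
# Illustrative "Day Zero" projection -- not the city's actual model.
TOTAL_CAPACITY_ML = 900_000  # assumed total dam capacity in megalitres
DAY_ZERO_LEVEL = 0.135       # the 13.5% cut-off quoted above

def days_until_day_zero(current_level, daily_use_ml):
    """Days of supply left before storage falls to the cut-off,
    assuming no inflow and constant daily consumption."""
    usable_ml = (current_level - DAY_ZERO_LEVEL) * TOTAL_CAPACITY_ML
    return usable_ml / daily_use_ml

# Dams at 25% of capacity, city using 500 ML (500 million litres) a day:
print(f"{days_until_day_zero(0.25, 500):.0f} days")  # -> 207 days
```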
Cape Town International Airport serves both domestic and international flights. It is the second-largest airport in South Africa and serves as a major gateway for travellers to the Cape region. Cape Town has regularly scheduled services to Southern Africa, East Africa, Mauritius, the Middle East, the Far East, Europe and the United States, as well as eleven domestic destinations. Cape Town International Airport opened a brand-new central terminal building, developed to handle an expected increase in air traffic as tourism numbers increased in the lead-up to the 2010 FIFA World Cup. Other renovations include several large new parking garages, a revamped domestic departure terminal, a new Bus Rapid Transit system station and a new double-decker road system. The airport's cargo facilities are also being expanded and several large empty lots are being developed into office space and hotels.
Cape Town is one of five internationally recognised Antarctic gateway cities with transport connections to the continent. Since 2021, commercial flights have operated from Cape Town to Wolf's Fang Runway, Antarctica. Cape Town International Airport has been among the winners of the World Travel Awards as Africa's leading airport, and is located 18 km (11 mi) from the Central Business District.
Cape Town has a long tradition as a port city. The Port of Cape Town, the city's main port, is in Table Bay directly to the north of the CBD. The port is a hub for ships in the southern Atlantic: it is located along one of the busiest shipping corridors in the world, and acts as a stopover point for goods en route to or from Latin America and Asia. It is also an entry point into the South African market. It is the second-busiest container port in South Africa after Durban. In 2004, it handled 3,161 ships and 9.2 million tonnes of cargo.
Simon's Town Harbour on the False Bay coast of the Cape Peninsula is the main operational base of the South African Navy.
Until the 1970s the city was served by the Union-Castle Line, with service to the United Kingdom and St Helena. The RMS St Helena provided passenger and cargo service between Cape Town and St Helena until the opening of St Helena Airport.
The cargo vessel M/V Helena, under AW Shipping Management, takes a limited number of passengers between Cape Town, St Helena and Ascension Island on its voyages. Multiple vessels also carry passengers between Cape Town and Tristan da Cunha, which is inaccessible by aircraft. In addition, NSB Niederelbe Schiffahrtsgesellschaft takes passengers on its cargo service to the Canary Islands and Hamburg, Germany.
Shosholoza Meyl, the passenger rail operation of Spoornet, runs two long-distance passenger rail services from Cape Town: a daily service to and from Johannesburg via Kimberley and a weekly service to and from Durban via Kimberley, Bloemfontein and Pietermaritzburg. These trains terminate at Cape Town railway station and make a brief stop at Bellville. Cape Town is also one terminus of the luxury tourist-oriented Blue Train as well as the five-star Rovos Rail.
Metrorail operates a commuter rail service in Cape Town and the surrounding area. The Metrorail network consists of 96 stations throughout the suburbs and outskirts of Cape Town.
Cape Town is the origin of three national roads: the N1, the N2 and the N7. The N1 and N2 begin in the foreshore area near the City Centre, while the N7 runs north toward Namibia. The N1 runs east-north-east through Edgemead, Parow, Bellville, Brackenfell and Kraaifontein, connecting Cape Town to major cities further inland, namely Bloemfontein, Johannesburg and Pretoria. An older at-grade road, the R101, runs parallel to the N1 from Bellville. The N2 runs east-south-east through Rondebosch, Guguletu, Khayelitsha and Macassar to Somerset West, becoming a multiple-carriageway, at-grade road from the intersection with the R44 onward. The N2 continues east along the coast, linking Cape Town to the coastal cities of Mossel Bay, George, Port Elizabeth, East London and Durban. An older at-grade road, the R102, runs parallel to the N1 initially, before veering south at Bellville to join the N2 at Somerset West via the suburbs of Kuils River and Eerste River. The N7 originates from the N1 at the Wingfield Interchange near Edgemead; it begins as a highway but becomes an at-grade road from the intersection with the M5 onward.
There are also a number of regional routes linking Cape Town with surrounding areas. The R27 originates from the N1 near the Foreshore and runs north parallel to the N7, but nearer to the coast. It passes through the suburbs of Milnerton, Table View and Bloubergstrand and links the city to the West Coast, ending at the town of Velddrif. The R44 enters the east of the metro from the north, from Stellenbosch. It connects Stellenbosch to Somerset West, then crosses the N2 to Strand and Gordon's Bay. It exits the metro heading south hugging the coast, leading to the towns of Betty's Bay and Kleinmond.
Of the three-digit routes, the R300 is an expressway linking the N1 at Brackenfell to the N2 near Mitchells Plain and Cape Town International Airport. The R302 runs from the R102 in Bellville, heading north across the N1 through Durbanville and leaving the metro towards Malmesbury. The R304 enters the northern limits of the metro from Stellenbosch, running north-north-west before veering west to cross the N7 at Philadelphia and ending at Atlantis at a junction with the R307. The R307 starts north of Koeberg from the R27 and, after meeting the R304, continues north to Darling. The R310 originates in Muizenberg and runs along the coast to the south of Mitchells Plain and Khayelitsha, before veering north-east, crossing the N2 west of Macassar, and exiting the metro towards Stellenbosch.
Cape Town, like most South African cities, uses Metropolitan or "M" routes for important intra-city routes, a layer below National (N) roads and Regional (R) routes. Each city's M roads are independently numbered. Most are at-grade roads. The M3 splits from the N2 and runs to the south along the eastern slopes of Table Mountain, connecting the City Bowl with Muizenberg. Except for a section between Rondebosch and Newlands that has at-grade intersections, this route is a highway. The M5 splits from the N1 further east than the M3, and links the Cape Flats to the CBD. It is a highway as far as the interchange with the M68 at Ottery, before continuing as an at-grade road. Cape Town has the worst traffic congestion in South Africa.
Golden Arrow Bus Services operates scheduled bus services in the Cape Town metropolitan area. Several companies run long-distance bus services from Cape Town to the other cities in South Africa.
Cape Town has a bus rapid transit system serving about 10% of the city, running north to south along its western coastline and comprising Phase 1 of the Integrated Rapid Transit (IRT) system. This is known as the MyCiTi service.
MyCiTi Phase 1 includes services linking the Airport to the Cape Town inner city, as well as the following areas: Blouberg / Table View, Dunoon, Atlantis and Melkbosstrand, Milnerton, Paarden Eiland, Century City, Salt River and Walmer Estate, and all suburbs of the City Bowl and Atlantic Seaboard all the way to Llandudno and Hout Bay.
The MyCiTi N2 Express service consists of two routes each linking the Cape Town inner city and Khayelitsha and Mitchells Plain on the Cape Flats.
The service uses high-floor articulated and standard-size buses in dedicated busways, low-floor articulated and standard-size buses on the N2 Express service, and smaller 9 m (30 ft) Optare buses in suburban and inner-city areas. It offers universal access through level boarding and numerous other measures, and requires cashless fare payment using an EMV-compliant smart card system called myconnect. Headways (i.e. the time between buses on the same route) range from three to twenty minutes at peak times to an hour off-peak.
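Headway converts directly into frequency, which is often the more intuitive figure; a small illustrative sketch using the headways quoted above:

```python
def buses_per_hour(headway_minutes):
    """Convert a headway (minutes between buses) into buses per hour."""
    return 60 / headway_minutes

print(buses_per_hour(3))   # 20.0 buses/hour at the shortest peak headway
print(buses_per_hour(20))  # 3.0 buses/hour at the longest peak headway
print(buses_per_hour(60))  # 1.0 bus/hour off-peak
```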
Cape Town has taxis as well as e-hailing services such as Uber. Taxis are either metered taxis or minibus taxis. Unlike in many cities, metered taxis are found waiting at transport hubs and tourist establishments rather than cruising for fares, while minibus taxis operate from taxi ranks or travel along main streets and can be hailed from the road.
Cape Town's metered taxis mostly operate in the City Bowl, the suburbs and the Cape Town International Airport area. Large companies that operate fleets of cabs can be reached by phone and are cheaper than the single operators that ply for hire from taxi ranks and the Victoria and Alfred Waterfront. There are about one thousand metered taxis in Cape Town, with rates varying from about R8 to R15 per kilometre. The larger taxi companies in Cape Town are Excite Taxis, Cabnet and Intercab, and single operators are reachable by cellular phone. The seven-seater Toyota Avanza is the most popular vehicle with the larger taxi companies. Metered cabs are mostly used by tourists and are safer to use than minibus taxis.
Minibus taxis are the standard form of transport for the majority of the population who cannot afford private vehicles. Although essential, these taxis are often poorly maintained and are frequently not road-worthy. These taxis make frequent unscheduled stops to pick up passengers, which can cause accidents. With the high demand for transport by the working class of South Africa, minibus taxis are often filled over their legal passenger allowance. Minibuses are generally owned and operated in fleets.
Cape Town is noted for its architectural heritage, with the highest density of Cape Dutch style buildings in the world. Cape Dutch style, which combines the architectural traditions of the Netherlands, Germany, France and Indonesia, is most visible in Constantia, the old government buildings in the Central Business District, and along Long Street. The annual Cape Town Minstrel Carnival, also known by its Afrikaans name of Kaapse Klopse, is a large minstrel festival held annually on 2 January or "Tweede Nuwe Jaar" (Second New Year). Competing teams of minstrels parade in brightly coloured costumes, performing Cape Jazz, either carrying colourful umbrellas or playing an array of musical instruments. The Artscape Theatre Centre is the largest performing arts venue in Cape Town. The city was named the World Design Capital for 2014 by the International Council of Societies of Industrial Design.
The city also encloses the 36-hectare Kirstenbosch National Botanical Garden, which contains protected natural forest and fynbos along with a variety of animals and birds. There are over 7,000 species in cultivation at Kirstenbosch, including many rare and threatened species of the Cape Floristic Region. In 2004 this Region, including Kirstenbosch, was declared a UNESCO World Heritage Site.
Whale watching is popular amongst tourists: southern right whales and humpback whales are seen off the coast during the breeding season (August to November) and Bryde's whales and orca can be seen any time of the year. The nearby town of Hermanus is known for its Whale Festival, but whales can also be seen in False Bay. Heaviside's dolphins are endemic to the area and can be seen from the coast north of Cape Town; dusky dolphins live along the same coast and can occasionally be seen from the ferry to Robben Island.
The only complete windmill in South Africa is Mostert's Mill in Mowbray. It was built in 1796 and restored in 1935 and again in 1995.
Food originating from or synonymous with Cape Town includes the savoury-sweet spiced meat dish Bobotie, which dates from the 17th century. The Gatsby, a sandwich filled with slap chips and other toppings, was first served in 1976 in the suburb of Athlone and is also synonymous with the city. The koe'sister is a traditional Cape Malay pastry described as a cinnamon-infused dumpling with a cake-like texture, finished off with a sprinkling of desiccated coconut. Malva pudding (sometimes known as Cape Malva pudding), a sticky sweet dessert often served with hot custard, is also associated with the city and dates back to the 17th century. A related dessert, Cape Brandy Pudding, is likewise associated with the city and surrounding region. Cape Town is also the home of the South African wine industry, with the first wine produced in the country being bottled in the city; a number of notable wineries still exist in the city, including Groot Constantia and Klein Constantia.
Several newspapers, magazines and printing facilities have their offices in the city. Independent News and Media publishes the major English language papers in the city, the Cape Argus and the Cape Times. Naspers, the largest media conglomerate in South Africa, publishes Die Burger, the major Afrikaans language paper.
Cape Town has many local community newspapers. Some of the largest community newspapers in English are the Athlone News from Athlone, the Atlantic Sun, the Constantiaberg Bulletin from Constantiaberg, the City Vision from Bellville, the False Bay Echo from False Bay, the Helderberg Sun from Helderberg, the Plainsman from Mitchells Plain, the Sentinel News from Hout Bay, the Southern Mail from the Southern Peninsula, the Southern Suburbs Tatler from the Southern Suburbs, Table Talk from Table View and Tygertalk from Tygervalley/Durbanville. Afrikaans language community newspapers include the Landbou-Burger and the Tygerburger. Vukani, based in the Cape Flats, is published in Xhosa.
Cape Town is a centre for major broadcast media with several radio stations that only broadcast within the city. 94.5 Kfm (94.5 MHz FM) and Good Hope FM (94–97 MHz FM) mostly play pop music. Heart FM (104.9 MHz FM), the former P4 Radio, plays jazz and R&B, while Fine Music Radio (101.3 FM) plays classical music and jazz, and Magic Music Radio (828 kHz MW) plays adult contemporary and classic rock from the '60s, '70s, '80s, '90s and '00s. Bush Radio is a community radio station (89.5 MHz FM). The Voice of the Cape (95.8 MHz FM) and Cape Talk (567 kHz MW) are the major talk radio stations in the city. Bokradio (98.9 MHz FM) is an Afrikaans music station. The University of Cape Town also runs its own radio station, UCT Radio (104.5 MHz FM).
The SABC has a small presence in the city, with satellite studios located at Sea Point. e.tv has a greater presence, with a large complex located at Longkloof Studios in Gardens. M-Net is not well represented with infrastructure within the city. Cape Town TV is a local TV station, supported by numerous organisations and focusing mostly on documentaries. Numerous production companies and their support industries are located in the city, mostly supporting the production of overseas commercials, model shoots, TV series and movies; however, the country's media infrastructure remains primarily in Johannesburg.
Cape Town's most popular sports by participation are cricket, association football, swimming, and rugby union. In rugby union, Cape Town is the home of the Western Province side, who play at Cape Town Stadium and compete in the Currie Cup. In addition, Western Province players (along with some from Wellington's Boland Cavaliers) comprise the Stormers in the United Rugby Championship competition. Cape Town has also been a host city for both the 1995 Rugby World Cup and 2010 FIFA World Cup, and annually hosts the Africa leg of the World Rugby 7s. It will also be host to the 2023 Netball World Cup.
Association football, which is mostly known as soccer in South Africa, is also popular. Two clubs from Cape Town play in the Premier Soccer League (PSL), South Africa's premier league: Ajax Cape Town, formed as a result of the 1999 amalgamation of the Seven Stars and the Cape Town Spurs, and the resurrected Cape Town City F.C. Cape Town hosted several matches of the 2010 FIFA World Cup held in South Africa, including a semi-final, for which the Mother City built a new 70,000-seat stadium (Cape Town Stadium) in the Green Point area.
In cricket, the Cape Cobras represent Cape Town at the Newlands Cricket Ground. The team is the result of an amalgamation of the Western Province Cricket and Boland Cricket teams. They take part in the Supersport and Standard Bank Cup Series. The Newlands Cricket Ground regularly hosts international matches.
Cape Town has had Olympic aspirations. For example, in 1996, Cape Town was one of the five candidate cities shortlisted by the IOC to launch official candidatures to host the 2004 Summer Olympics. Although the Games ultimately went to Athens, Cape Town came in third place. There has been some speculation that Cape Town was seeking the South African Olympic Committee's nomination to be South Africa's bid city for the 2020 Summer Olympic Games. That was quashed when the International Olympic Committee awarded the 2020 Games to Tokyo.
The city of Cape Town has vast experience in hosting major national and international sports events. The Cape Town Cycle Tour is the world's largest individually timed road cycling race – and the first event outside Europe to be included in the International Cycling Union's Golden Bike series. It sees over 35,000 cyclists tackling a 109 km (68 mi) route around Cape Town. The Absa Cape Epic is the largest full-service mountain bike stage race in the world. Some notable events hosted by Cape Town have included the 1995 Rugby World Cup, 2003 ICC Cricket World Cup, and World Championships in various sports such as athletics, fencing, weightlifting, hockey, cycling, canoeing, gymnastics and others. Cape Town was also a host city to the 2010 FIFA World Cup from 11 June to 11 July 2010, further enhancing its profile as a major events city. It was also one of the host cities of the 2009 Indian Premier League cricket tournament. The Mother City has also played host to the Africa leg of the annual World Rugby 7s event since 2015; for nine seasons, from 2002 until 2010, the event was staged in George in the Western Cape, before moving to Port Elizabeth for the 2011 edition, and then to Cape Town in 2015. The event usually takes place in mid-December, and is hosted at the Cape Town Stadium in Green Point.
There are several golf courses in Cape Town. The Clovelly Country Club and Metropolitan Golf Club are two of the best golf courses in the city, both offering superb views over their 18 holes.
The coastline of Cape Town is relatively long, and the varied exposure to weather conditions makes it fairly common for water conditions to be conducive to recreational scuba diving at some part of the city's coast. There is considerable variation in the underwater environment and regional ecology as there are dive sites on reefs and wrecks on both sides of the Cape Peninsula and False Bay, split between two coastal marine ecoregions by the Cape Peninsula, and also variable by depth zone.
False Bay is open to the south, and the prevailing open ocean swell arrives from the southwest, so the exposure varies considerably around the coastline. The inshore bathymetry near Cape Point is shallow enough for a moderate amount of refraction of long-period swell, but deep enough to have less effect on short-period swell, and acts as a filter to pass mainly the longer swell components to the western shores, although they are significantly attenuated. The eastern shores get more of the open ocean spectrum, and this results in very different swell conditions between the two sides at any given time. The fetch is generally too short for southeasterly winds to produce good surf. There are more than 20 named breaks in False Bay. The north-wester can have a long fetch and can produce large waves, but they may also be associated with local wind and be very poorly sorted. The Atlantic coast is exposed to the full power of the south-westerly swell produced by the westerly winds of the Southern Ocean, often a long way away, so the swell has time to separate into similar wavelengths, and there are some world-class big-wave breaks among the named breaks of the Atlantic shore.
{
"paragraph_id": 0,
"text": "Cape Town is the legislative capital of South Africa. It is the country's oldest city and the seat of the Parliament of South Africa. It is the country's second-largest city, after Johannesburg, and the largest in the Western Cape. The city is part of the City of Cape Town metropolitan municipality.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The city is known for its harbour, its natural setting in the Cape Floristic Region, and for landmarks such as Table Mountain and Cape Point. In 2014, Cape Town was named the best place in the world to visit by The New York Times and similarly by The Daily Telegraph in 2016.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Located on the shore of Table Bay, the City Bowl area of Cape Town is the oldest urban area in the Western Cape, with a significant cultural heritage. It was founded by the Dutch East India Company (VOC) as a supply station for Dutch ships sailing to East Africa, India, and the Far East. Jan van Riebeeck's arrival on 6 April 1652 established the VOC Cape Colony, the first permanent European settlement in South Africa. Cape Town outgrew its original purpose as the first European outpost at the Castle of Good Hope, becoming the economic and cultural hub of the Cape Colony. Until the Witwatersrand Gold Rush and the development of Johannesburg, Cape Town was the largest city in southern Africa.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The metropolitan area has a long coastline on the Atlantic Ocean, which includes False Bay, and extends to the Hottentots Holland mountains to the east. The Table Mountain National Park is within the city boundaries and there are several other nature reserves and marine-protected areas within, and adjacent to, the city, protecting the diverse terrestrial and marine natural environment.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The earliest known remnants of human occupation in the region were found at Peers Cave in Fish Hoek and have been dated to between 15,000 and 12,000 years old.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Little is known of the history of the region's first residents, since there is no written history from the area before it was first mentioned by Portuguese explorer Bartolomeu Dias. Dias, the first European to reach the area, arrived in 1488 and named it \"Cape of Storms\" (Cabo das Tormentas). It was later renamed by John II of Portugal as \"Cape of Good Hope\" (Cabo da Boa Esperança) because of the great optimism engendered by the opening of a sea route to the Indian subcontinent and East Indies.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In 1497, Portuguese explorer Vasco da Gama recorded a sighting of the Cape of Good Hope.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1510, at the Battle of Salt River, the Portuguese admiral Francisco de Almeida and sixty-four of his men were killed and his party was defeated by the !Uriǁ’aekua (\"Goringhaiqua\" in Dutch approximate spelling) using specially trained cattle. The !Uriǁ’aekua were one of the so-called Khoekhoe clans who inhabited the area.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In the late 16th century French, Danish, Dutch and English, but mainly Portuguese, ships regularly continued to stop over in Table Bay en route to the Indies. They traded tobacco, copper, and iron with the Khoekhoe clans of the region in exchange for fresh meat and other essential travelling provisions.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In 1652, Jan van Riebeeck and other employees of the United East India Company (Dutch: Verenigde Oost-indische Compagnie, VOC) were sent to the Cape Colony to establish a way-station for ships travelling to the Dutch East Indies, and the Fort de Goede Hoop (later replaced by the Castle of Good Hope). The settlement grew slowly during this period, as it was hard to find adequate labour. This labour shortage prompted the local authorities to import enslaved people from Indonesia and Madagascar. Many of these people are ancestors of modern-day Cape Coloured communities.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Under Van Riebeeck and his successors, as VOC commanders and later governors at the Cape, a wide range of agricultural plants were introduced to the Cape. Some of these, including grapes, cereals, ground nuts, potatoes, apples and citrus, had a large and lasting influence on the societies and economies of the region.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "With the Dutch Republic being transformed into Revolutionary France's vassal Batavian Republic, Great Britain moved to take control of Dutch colonies, including the colonial possessions of the VOC.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Britain captured Cape Town in 1795, but it was returned to the Dutch by treaty in 1803. British forces occupied the Cape again in 1806 following the Battle of Blaauwberg when the successor state to the Batavian Republic, the Kingdom of Holland, allied with France during the Napoleonic Wars.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In the Anglo-Dutch Treaty of 1814, Cape Town was permanently ceded to the United Kingdom. It became the capital of the newly formed Cape Colony, whose territory expanded very substantially through the 1800s. With expansion came calls for greater independence from the UK, with the Cape attaining its own parliament (1854) and a locally accountable Prime Minister (1872). Suffrage was established according to the non-racial Cape Qualified Franchise.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "During the 1850s and 1860s, additional plant species were introduced from Australia by the British authorities. Notably rooikrans was introduced to stabilise the sand of the Cape Flats to allow for a road connecting the peninsula with the rest of the African continent and eucalyptus was used to drain marshes.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1859 the first railway line was built by the Cape Government Railways and a system of railways rapidly expanded in the 1870s. The discovery of diamonds in Griqualand West in 1867, and the Witwatersrand Gold Rush in 1886, prompted a flood of immigration into South Africa. In 1895 the city's first public power station, the Graaff Electric Lighting Works, was opened.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Conflicts between the Boer republics in the interior and the British colonial government resulted in the Second Boer War of 1899–1902. Britain's victory in this war led to the formation of a united South Africa. From 1891 to 1901, the city's population more than doubled from 67,000 to 171,000.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "As the 19th century came to an end, the economic and political dominance of Cape Town in the Southern Africa region during the 19th century started to give way to the dominance of Johannesburg and Pretoria in the 20th century.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1910, Britain established the Union of South Africa, which unified the Cape Colony with the two defeated Boer Republics and the British colony of Natal. Cape Town became the legislative capital of the Union, and later of the Republic of South Africa.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "By the time of the 1936 census, Johannesburg had overtaken Cape Town as the largest city in the country.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1945 the expansion of the Cape Town foreshore was completed adding an additional 194 ha (480 acres) to the City Bowl area to the city centre.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Prior to the mid-twentieth century, Cape Town was one of the most racially integrated cities in South Africa. In the 1948 national elections, the National Party won on a platform of apartheid (racial segregation) under the slogan of \"swart gevaar\" (Afrikaans for \"black danger\"). This led to the erosion and eventual abolition of the Cape's multiracial franchise.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 1950, the apartheid government first introduced the Group Areas Act, which classified and segregated urban areas according to race. Formerly multi-racial suburbs of Cape Town were either purged of residents deemed unlawful by apartheid legislation, or demolished. The most infamous example of this in Cape Town was the suburb of District Six. After it was declared a whites-only area in 1965, all housing there was demolished and over 60,000 residents were forcibly removed. Many of these residents were relocated to the Cape Flats.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The earliest of the Cape Flats forced removals saw the expulsion of Black South Africans to the Langa, Cape Town's first and oldest township, in line with the 1923 Native Urban Areas Act.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Under apartheid, the Cape was considered a \"Coloured labour preference area\", to the exclusion of \"Bantus\", i.e. Black Africans. The implementation of this policy was widely opposed by trade unions, civil society and opposition parties. It is notable that this policy was not advocated for by any Coloured political group, and its implementation was a unilateral decision by the apartheid government. During the student-led Soweto Uprising of June 1976, school students from Langa, Gugulethu and Nyanga in Cape Town reacted to the news of the protests against Bantu Education by organising gatherings and marches of their own. A number of school buildings were burnt down and the protest action was met with forceful resistance from the police.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Cape Town has been home to many leaders of the anti-apartheid movement. In Table Bay, 10 km (6 mi) from the city is Robben Island. This penitentiary island was the site of a maximum security prison where many famous apartheird-era political prisoners served long prison sentences. Famous prisoners include activist, lawyer and future president Nelson Mandela who served 18 of his 27 years of imprisonment on the island, as well as two other future presidents, Kgalema Motlanthe and Jacob Zuma.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In one of the most famous moments marking the end of apartheid, Nelson Mandela made his first public speech since his imprisonment, from the balcony of Cape Town City Hall, hours after being released on 11 February 1990. His speech heralded the beginning of a new era for the country. The first democratic election, was held four years later, on 27 April 1994.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Nobel Square in the Victoria & Alfred Waterfront features statues of South Africa's four Nobel Peace Prize winners: Albert Luthuli, Desmond Tutu, F. W. de Klerk and Nelson Mandela.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Cape Town faced a severe water shortage from 2015 to 2018.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Since the 2010s, Cape Town and the wider Western Cape province have seen the rise of a small secessionist movement. Support for parties \"which have formally adopted Cape independence\" was around 5% in the 2021 municipal elections.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Cape Town is located at latitude 33.55° S (approximately the same as Sydney and Buenos Aires and equivalent to Casablanca and Los Angeles in the northern hemisphere) and longitude 18.25° E.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 31,
"text": "Table Mountain, with its near vertical cliffs and flat-topped summit over 1,000 m (3,300 ft) high, and with Devil's Peak and Lion's Head on either side, together form a dramatic mountainous backdrop enclosing the central area of Cape Town, the so-called City Bowl. A thin strip of cloud, known colloquially as the \"tablecloth\" (\"Karos\" in Afrikaans), sometimes forms on top of the mountain. To the immediate south of the city, the Cape Peninsula is a scenic mountainous spine jutting 40 km (25 mi) southward into the Atlantic Ocean and terminating at Cape Point.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 32,
"text": "There are over 70 peaks above 300 m (980 ft) within Cape Town's official metropolitan limits. Many of the city's suburbs lie on the large plain called the Cape Flats, which extends over 50 km (30 mi) to the east and joins the peninsula to the mainland. The Cape Town region is characterised by an extensive coastline, rugged mountain ranges, coastal plains and inland valleys.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 33,
"text": "The extent of Cape Town has varied considerably over time. It originated as a small settlement at the foot of Table Mountain and has grown beyond its city limits as a metropolitan area to encompass the entire Cape Peninsula to the south, the Cape Flats, the Helderberg basin and part of the Steenbras catchment area to the east, and the Tygerberg hills, Blouberg and other areas to the north. Robben Island in Table Bay is also part of Cape Town. It is bounded by the Atlantic Ocean to the west, and False Bay to the south. To the north and east, the extent is demarcated by boundaries of neighbouring municipalities within the Western Cape province.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 34,
"text": "The official boundaries of the city proper extend between the City Bowl and the Atlantic Seaboard to the east and the Southern Suburbs to the south. The City of Cape Town, the metropolitan municipality that takes its name from the city covers the Greater Cape Town metropolitan area, known as the Cape Metropole, extending beyond the city proper itself to include a number of satellite towns, suburbs and rural areas such as Milnerton, Atlantis, Bellville, Brackenfell, Durbanville, Goodwood, Gordon's Bay, Hout Bay, Kraaifontein, Kuilsrivier, Muizenberg, Simon's Town, Somerset West and Strand among others.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 35,
"text": "The Cape Peninsula is 52 km (30 mi) long from Mouille Point in the north to Cape Point in the south, with an area of about 470 km (180 sq mi), and it displays more topographical variety than other similar sized areas in southern Africa, and consequently spectacular scenery. There are diverse low-nutrient soils, large rocky outcrops, scree slopes, a mainly rocky coastline with embayed beaches, and considerable local variation in climatic conditions. The sedimentary rocks of the Cape Supergroup, of which parts of the Graafwater and Peninsula Formations remain, were uplifted between 280 and 21S million years ago, and were largely eroded away during the Mesozoic. The region was geologically stable during the Tertiary, which has led to slow denudation of the durable sandstones. Erosion rate and drainage has been influenced by fault lines and fractures, leaving remnant steep-sided massifs like Table Mountain surrounded by flatter slopes of deposits of the eroded material overlaying the older rocks,",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 36,
"text": "There are two internationally notable landmarks, Table Mountain and Cape Point, at opposite ends of the Peninsula Mountain Chain, with the Cape Flats and False Bay to the east and the Atlantic Ocean to the west. The landscape is dominated by sandstone plateaux and ridges, which generally drop steeply at their margins to the surrounding debris slopes, interrupted by a major gap at the Fish Hoek–Noordhoek valley. In the south much of the area is a low sandstone plateau with sand dunes. Maximum altitude is 1113 m on Table Mountain. The Cape Flats (Afrikaans: Kaapse Vlakte) is a flat, low-lying, sandy area, area to the east the Cape Peninsula, and west of the Helderberg much of which was wetland and dunes within recent history. To the north are the Tygerberg Hills and the Stellenbosch district.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 37,
"text": "The Helderberg area of Greater Cape Town, previously known as the \"Hottentots-Holland\" area, is mostly residential, but also a wine-producing area east of the Cape Flats, west of the Hottentots Holland mountain range and south of the Helderberg mountain, from which it gets its current name. The Helderberg consists of the previous municipalities of Somerset West, Strand, Gordons Bay and a few other towns. Industry and commerce is largely in service of the area. After the Cape Peninsula, Helderberg is the next most mountainous part of Greater Cape Town, bordered to the north and east by the highest peaks in the region along the watershed of the Helderberg and Hottentots Holland Mountains, which are part of the Cape Fold Belt with Cape Supergroup strata on a basement of Tygerberg Formation rocks intruded by part of the Stellenbosch granite pluton. The region includes the entire catchment of the Lourens and Sir Lowry's rivers, separated by the Schapenberg hill, and a small part of the catchment of the Eerste River to the west. The Helderberg is ecologically highly diverse, rivaling the Cape Peninsula, and has its own endemic ecoregions and several conservation areas.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 38,
"text": "To the east of the Hottentots Holland mountains is the valley of the Steenbras River, in which the Steenbras Dam was built as a water supply for Cape Town. The dam has been supplemented by several other dams around the western Cape, some of them considerably larger. This is almost entirely a conservation area, of high biodiversity. Bellville, Brackenfell, Durbanville, Kraaifontein, Goodwood and Parow are a few of the towns that make up the Northern Suburbs of Cape Town. In current popular culture these areas are often referred to as being beyond the \"boerewors curtain,\" a play on the term \"iron curtain.\"",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 39,
"text": "UNESCO declared Robben Island in the Western Cape a World Heritage Site in 1999. Robben Island is located in Table Bay, some 6 km (3.7 mi) west of Bloubergstrand, a coastal suburb north of Cape Town, and stands some 30m above sea level. Robben Island has been used as a prison where people were isolated, banished, and exiled for nearly 400 years. It was also used as a leper colony, a post office, a grazing ground, a mental hospital, and an outpost.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 40,
"text": "The Cape Peninsula is a rocky and mountainous peninsula that juts out into the Atlantic Ocean at the south-western extremity of the continent. At its tip is Cape Point and the Cape of Good Hope. The peninsula forms the west side of False Bay and the Cape Flats. On the east side are the Helderberg and Hottentots Holland mountains. The three main rock formations are the late-Precambrian Malmebury group (sedimentary and metamorphic rock), the Cape Granite suit, comprising the huge Peninsula, Kuilsrivier-Helderberg, and Stellenbosch batholiths, that were intruded into the Malmesbury Group about 630 million years ago, and the Table Mountain group sandstones that were deposited on the eroded surface of the granite and Malmesbury series basement about 450 million years ago.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 41,
"text": "The sand, silt and mud deposits were lithified by pressure and then folded during the Cape Orogeny to form the Cape Fold Belt, which extends in an arc along the western and southern coasts. The present landscape is due to prolonged erosion having carved out deep valleys, removing parts of the once continuous Table Mountain Group sandstone cover from over the Cape Flats and False Bay, and leaving high residual mountain ridges.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 42,
"text": "At times the sea covered the Cape Flats and Noordhoek valley and the Cape Peninsula was then a group of islands. During glacial periods the sea level dropped to expose the bottom of False Bay to weathering and erosion, with the last major regression leaving the entire bottom of False Bay exposed. During this period an extensive system of dunes was formed on the sandy floor of False Bay. At this time the drainage outlets lay between Rocky Bank Cape Point to the west, and between Rocky Bank and Hangklip Ridge to the east, with the watershed roughly along the line of the contact zone east of Seal Island and Whittle Rock.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 43,
"text": "Cape Town has a warm Mediterranean climate (Köppen: Csb), with mild, moderately wet winters and dry, warm summers. Winter, which lasts from June to September, may see large cold fronts entering for limited periods from the Atlantic Ocean with significant precipitation and strong north-westerly winds. Winter months in the city average a maximum of 18 °C (64 °F) and minimum of 8.5 °C (47 °F). Total annual rainfall in the city averages 515 mm (20.3 in) although in the Southern Suburbs, close to the mountains, rainfall is significantly higher and averages closer to 1,000 mm (39.4 in).",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 44,
"text": "Summer, which lasts from December to March, is warm and dry with an average maximum of 26 °C (79 °F) and minimum of 16 °C (61 °F). The region can get uncomfortably hot when the Berg Wind, meaning \"mountain wind\", blows from the Karoo interior. Spring and summer generally feature a strong wind from the south-east, known locally as the south-easter or the Cape Doctor, so called because it blows air pollution away. This wind is caused by a persistent high-pressure system over the South Atlantic to the west of Cape Town, known as the South Atlantic High, which shifts latitude seasonally, following the sun, and influencing the strength of the fronts and their northward reach. Cape Town receives about 3,100 hours of sunshine per year.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 45,
"text": "Water temperatures range greatly, between 10 °C (50 °F) on the Atlantic Seaboard, to over 22 °C (72 °F) in False Bay. Average annual ocean surface temperatures are between 13 °C (55 °F) on the Atlantic Seaboard (similar to Californian waters, such as San Francisco or Big Sur), and 17 °C (63 °F) in False Bay (similar to Northern Mediterranean temperatures, such as Nice or Monte Carlo).",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 46,
"text": "Unlike other parts of the country the city does not have many thunderstorms, and most of those that do occur, happen around October to December and March to April.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 47,
"text": "A 2019 paper published in PLOS One estimated that under Representative Concentration Pathway 4.5, a \"moderate\" scenario of climate change where global warming reaches ~2.5–3 °C (4.5–5.4 °F) by 2100, the climate of Cape Town in the year 2050 would most closely resemble the current climate of Perth in Australia. The annual temperature would increase by 1.1 °C (2.0 °F), and the temperature of the coldest month by 0.3 °C (0.54 °F), while the temperature of the warmest month would be 2.3 °C (4.1 °F) higher. According to Climate Action Tracker, the current warming trajectory appears consistent with 2.7 °C (4.9 °F), which closely matches RCP 4.5.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 48,
"text": "Moreover, according to the 2022 IPCC Sixth Assessment Report, Cape Town is one of 12 major African cities (Abidjan, Alexandria, Algiers, Cape Town, Casablanca, Dakar, Dar es Salaam, Durban, Lagos, Lomé, Luanda and Maputo) which would be the most severely affected by future sea level rise. It estimates that they would collectively sustain cumulative damages of US$65 billion under RCP 4.5 and US$86.5 billion for the high-emission scenario RCP 8.5 by the year 2050. Additionally, RCP 8.5 combined with the hypothetical impact from marine ice sheet instability at high levels of warming would involve up to US$137.5 billion in damages, while the additional accounting for the \"low-probability, high-damage events\" may increase aggregate risks to US$187 billion for the \"moderate\" RCP4.5, US$206 billion for RCP8.5 and US$397 billion under the high-end ice sheet instability scenario. Since sea level rise would continue for about 10,000 years under every scenario of climate change, future costs of sea level rise would only increase, especially without adaptation measures.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 49,
"text": "Cape Town's coastal water ranges from cold to mild, and the difference between the two sides of the peninsula can be dramatic. While the Atlantic Seaboard averages annual sea surface temperatures around 13 °C (55 °F), the False Bay coast is much warmer, averaging between 16 and 17 °C (61 and 63 °F) annually. In summer, False Bay water averages slightly over 20 °C (68 °F), with 22 °C (72 °F) an occasional high. Beaches located on the Atlantic Coast tend to have colder water due to the wind driven upwellings which contribute to the Benguela Current which originates off the Cape Peninsula, while the water at False Bay beaches may occasionally be warmer by up to 10 °C (18 °F) at the same time in summer.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 50,
"text": "In summer False Bay is thermally stratified, with a vertical temperature variation of 5 to 9˚C between the warmer surface water and cooler depths below 50 m, while in winter the water column is at nearly constant temperature at all depths. The development of a thermocline is strongest around late December and peaks in late summer to early autumn. In summer the south easterly winds generate a zone of upwelling near Cape Hangklip, where surface water temperatures can be 6 to 7 °C colder than the surrounding areas, and bottom temperatures below 12 °C.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 51,
"text": "In the summer to early autumn (January–March), cold water upwelling near Cape Hangklip causes a strong surface temperature gradient between the south-western and north-eastern corners of the bay. In winter the surface temperature tends to be much the same everywhere. In the northern sector surface temperature varies a bit more (13 to 22 °C) than in the south (14 to 20 °C) during the year.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 52,
"text": "Surface temperature variation from year to year is linked to the El Niño–Southern Oscillation. During El Niño years the South Atlantic high is shifted, reducing the south-easterly winds, so upwelling and evaporative cooling are reduced and sea surface temperatures throughout the bay are warmer, while in La Niña years there is more wind and upwelling and consequently lower temperatures. Surface water heating during El Niño increases vertical stratification. The relationship is not linear. Occasionally eddies from the Agulhas current will bring warmer water and vagrant sea life carried from the south and east coasts into False Bay.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 53,
"text": "Located in a Conservation International biodiversity hotspot as well as the unique Cape Floristic Region, the city of Cape Town has one of the highest levels of biodiversity of any equivalent area in the world. These protected areas are a World Heritage Site, and an estimated 2,200 species of plants are confined to Table Mountain – more than exist in the whole of the United Kingdom which has 1200 plant species and 67 endemic plant species. Many of these species, including a great many types of proteas, are endemic to the mountain and can be found nowhere else.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 54,
"text": "It is home to a total of 19 different vegetation types, of which several are endemic to the city and occur nowhere else in the world. It is also the only habitat of hundreds of endemic species, and hundreds of others which are severely restricted or threatened. This enormous species diversity is mainly because the city is uniquely located at the convergence point of several different soil types and micro-climates.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 55,
"text": "Table Mountain has an unusually rich biodiversity. Its vegetation consists predominantly of several different types of the unique and rich Cape Fynbos. The main vegetation type is endangered Peninsula Sandstone Fynbos, but critically endangered Peninsula Granite Fynbos, Peninsula Shale Renosterveld and Afromontane forest occur in smaller portions on the mountain.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 56,
"text": "Rapid population growth and urban sprawl has covered much of these ecosystems with development. Consequently, Cape Town now has over 300 threatened plant species and 13 which are now extinct. The Cape Peninsula, which lies entirely within the city of Cape Town, has the highest concentration of threatened species of any continental area of equivalent size in the world. Tiny remnant populations of critically endangered or near extinct plants sometimes survive on road sides, pavements and sports fields. The remaining ecosystems are partially protected through a system of over 30 nature reserves – including the massive Table Mountain National Park.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 57,
"text": "Cape Town reached first place in the 2019 iNaturalist City Nature Challenge in two out of the three categories: Most Observations, and Most Species. This was the first entry by Capetonians in this annual competition to observe and record the local biodiversity over a four-day long weekend during what is considered the worst time of the year for local observations. A worldwide survey suggested that the extinction rate of endemic plants from the City of Cape Town is one of the highest in the world, at roughly three per year since 1900 – partly a consequence of the very small and localised habitats and high endemicity.",
"title": "Geography and the natural environment"
},
{
"paragraph_id": 58,
"text": "Cape Town is governed by a 231-member city council elected in a system of mixed-member proportional representation. The city is divided into 116 wards, each of which elects a councillor by first-past-the-post voting. The remaining 115 councillors are elected from party lists so that the total number of councillors for each party is proportional to the number of votes received by that party.",
"title": "Government"
},
{
"paragraph_id": 59,
"text": "In the 2021 Municipal Elections, the Democratic Alliance (DA) kept its majority, this time diminished, taking 136 seats. The African National Congress lost substantially, receiving 43 of the seats. The Democratic Alliance candidate for the Cape Town mayoralty, Geordin Hill-Lewis was elected mayor.",
"title": "Government"
},
{
"paragraph_id": 60,
"text": "Cape Town has nineteen active sister city agreements",
"title": "Government"
},
{
"paragraph_id": 61,
"text": "The City of Cape Town has expressed explicit support for Ukraine during the 2022 invasion of the country by Russia. To show this support the City of Cape Town lit up the Old City Hall in the colours of the Ukrainian flag on 2 March 2022. This has differentiated the city from the officially neutral foreign policy position taken by the South African national government.",
"title": "Government"
},
{
"paragraph_id": 62,
"text": "According to the South African National Census of 2011, the population of the City of Cape Town metropolitan municipality – an area that includes suburbs and exurbs – is 3,740,026 people. This represents an annual growth rate of 2.6% compared to the results of the previous census in 2001 which found a population of 2,892,243 people. Of those residents who were asked about their first language, 35.7% spoke Afrikaans, 29.8% spoke Xhosa and 28.4% spoke English. 24.8% of the population is under the age of 15, while 5.5% is 65 or older. The sex ratio is 96, meaning that there are slightly more women than men.",
"title": "Demographics"
},
{
"paragraph_id": 63,
"text": "Of those residents aged 20 or older, 1.8% have no schooling, 8.1% have some schooling but did not finish primary school, 4.6% finished primary school but have no secondary schooling, 38.9% have some secondary schooling but did not finish Grade 12, 29.9% finished Grade 12 but have no higher education, and 16.7% have higher education. Overall, 46.6% have at least a Grade 12 education. Of those aged between 5 and 25, 67.8% are attending an educational institution. Amongst those aged between 15 and 65 the unemployment rate is 23.7%. The average annual household income is R161,762.",
"title": "Demographics"
},
{
"paragraph_id": 64,
"text": "The total number of households grew from 653,085 in 1996 to 1,068,572 in 2011, which represents an increase of 63.6%. The average number of household members declined from 3,92 in 1996 to 3,50 in 2011. Of those households, 78.4% are in formal structures (houses or flats), while 20.5% are in informal structures (shacks). 97.3% of City-supplied households have access to electricity, and 94.0% of households use electricity for lighting. 87.3% of households have piped water to the dwelling, while 12.0% have piped water through a communal tap. 94.9% of households have regular refuse collection service. 91.4% of households have a flush toilet or chemical toilet, while 4.5% still use a bucket toilet. 82.1% of households have a refrigerator, 87.3% have a television and 70.1% have a radio. Only 34.0% have a landline telephone, but 91.3% have a cellphone. 37.9% have a computer, and 49.3% have access to the Internet (either through a computer or a cellphone).",
"title": "Demographics"
},
{
"paragraph_id": 65,
"text": "In 2011 over 70% of cross provincial South African migrants coming into the Western Cape settled in Cape Town; 53.64% of South African migrants into the Western Cape came from the Eastern Cape, the old Cape Colony's former native reserve, and 20.95% came from Gauteng province.",
"title": "Demographics"
},
{
"paragraph_id": 66,
"text": "According to the 2016 City of Cape Town community survey, there were 4,004,793 people in the City of Cape Town metro. Out of this population, 45.7% identified as Black African, 35.1% identified as Coloured, 16.2% identified as White and 1.6% identified as Asian.",
"title": "Demographics"
},
{
"paragraph_id": 67,
"text": "During the outbreak of the COVID-19 pandemic in South Africa, local media reported that increasing numbers of wealthy and middle-class South Africans have started moving from inland areas to coastal regions of the country, most notably Cape Town, in a phenomenon referred to as \"semigration.\" Declining municipal services in the rest of the country and the South African energy crisis are other cited reasons for semigration.",
"title": "Demographics"
},
{
"paragraph_id": 68,
"text": "The city's population is expected to grow by an additional 400,000 residents between 2020 and 2025 with 76% of those new residents falling into the low-income bracket earning less than R 13,000 a month.",
"title": "Demographics"
},
{
"paragraph_id": 69,
"text": "In the 2015 General Household Survey 82.3% of respondents self identified as Christian, 8% as Muslim, 3.8% as following a traditional African religion and 3.1% as \"nothing in particular.\"",
"title": "Demographics"
},
{
"paragraph_id": 70,
"text": "Most places of worship in the city are Christian churches and cathedrals: Zion Christian Church, Apostolic Faith Mission of South Africa, Assemblies of God, Baptist Union of Southern Africa (Baptist World Alliance), Methodist Church of Southern Africa (World Methodist Council), Anglican Church of Southern Africa (Anglican Communion), Presbyterian Church of Africa (World Communion of Reformed Churches), Roman Catholic Archdiocese of Cape Town (Catholic Church), the Orthodox Archbishopric of Good Hope (Greek Orthodox Cathedral of St George) and the Church of Jesus Christ of Latter-day Saints (LDS Church). The LDS Church announced 4 April 2021 the construction of a temple with groundbreaking dates yet to be announced.",
"title": "Demographics"
},
{
"paragraph_id": 71,
"text": "Islam is the city's second largest religion with a long history in Cape Town, resulting in a number of mosques and other Muslim religious sites spread across the city, such as the Auwal Mosque, South Africa's first mosque.",
"title": "Demographics"
},
{
"paragraph_id": 72,
"text": "Cape Town's significant Jewish population supports a number of synagogues most notably the historic Gardens Shul, the oldest Jewish congregation in South Africa. Marais Road Shul in the city's Jewish hub, Sea Point, is the largest Jewish congregation in South Africa. Temple Israel (Cape Town Progressive Jewish Congregation) also has three temples in the city. There is also a Chabad centre in Sea Point and a Chabad on Campus at the University of Cape Town, catering to Jewish students.",
"title": "Demographics"
},
{
"paragraph_id": 73,
"text": "Other religious sites in the city include Hindu, Buddhist and Baháʼí temples.",
"title": "Demographics"
},
{
"paragraph_id": 74,
"text": "In recent years, the city has struggled with drugs, a surge in violent drug-related crime and more recently gang violence. In the Cape Flats alone, there were approximately 100,000 people in over 130 different gangs in 2018. While there are some alliances, this multitude and division is also cause for conflict between groups. At the same time, the economy has grown due to the boom in the tourism and the real estate industries. Since July 2019 widespread violent crime in poorer gang dominated areas of greater Cape Town has resulted in an ongoing military presence in these neighbourhoods. Cape Town had the highest murder rate among large South African cities at 77 murders per 100,000 people in the period April 2018 to March 2019, with 3157 murders mostly occurring in poor townships created under the apartheid regime. In 2022 the Mexican Council for Public Security and Criminal Justice ranked Cape Town as one of the 50 most violent cities in the world.",
"title": "Demographics"
},
{
"paragraph_id": 75,
"text": "The city is South Africa's second main economic centre and Africa's third main economic hub city. It serves as the regional manufacturing centre in the Western Cape. In 2019 the city's GMP of R489 billion (US$33.04 billion) represented 71.1% of the Western Cape's total GRP and 9.6% of South Africa's total GDP; the city also accounted for 11.1% of all employed people in the country and had a citywide GDP per capita of R111,364 (US$7,524). Since the global financial crisis of 2007 the city's economic growth rate has mirrored South Africa's decline in growth whilst the population growth rate for the city has remained steady at around 2% a year. Around 80% of the city's economic activity is generated by the tertiary sector of the economy with the finance, retail, real-estate, food and beverage industries being the four largest contributors to the city's economic growth rate.",
"title": "Economy"
},
{
"paragraph_id": 76,
"text": "In 2008 the city was named as the most entrepreneurial city in South Africa, with the percentage of Capetonians pursuing business opportunities almost three times higher than the national average. Those aged between 18 and 64 were 190% more likely to pursue new business, whilst in Johannesburg, the same demographic group was only 60% more likely than the national average to pursue a new business.",
"title": "Economy"
},
{
"paragraph_id": 77,
"text": "With the highest number of successful information technology companies in Africa, Cape Town is an important centre for the industry on the continent. This includes an increasing number of companies in the space industry. Growing at an annual rate of 8.5% and an estimated worth of R77 billion in 2010, nationwide the high tech industry in Cape Town is becoming increasingly important to the city's economy. A number of entrepreneurship initiatives and universities hosting technology startups such as Jumo, Yoco, Aerobotics, Luno, Rain telecommunication and The Sun Exchange are located in the city.",
"title": "Economy"
},
{
"paragraph_id": 78,
"text": "The city has the largest film industry in the Southern Hemisphere generating R5 billion (US$476.19 million) in revenue and providing an estimated 6,058 direct and 2,502 indirect jobs in 2013. Much of the industry is based out of the Cape Town Film Studios.",
"title": "Economy"
},
{
"paragraph_id": 79,
"text": "Most companies headquartered in the city are insurance companies, retail groups, publishers, design houses, fashion designers, shipping companies, petrochemical companies, architects and advertising agencies. Some of the most notable companies headquartered in the city are food and fashion retailer Woolworths, supermarket chain Pick n Pay Stores and Shoprite, New Clicks Holdings Limited, fashion retailer Foschini Group, internet service provider MWEB, Mediclinic International, eTV, multinational mass media giant Naspers, and financial services giant Sanlam and Old Mutual Park.",
"title": "Economy"
},
{
"paragraph_id": 80,
"text": "Other notable companies include Belron, Ceres Fruit Juices, Coronation Fund Managers, Vida e Caffè, Capitec Bank. The city is a manufacturing base for several multinational companies including, Johnson & Johnson, GlaxoSmithKline, Levi Strauss & Co., Adidas, Bokomo Foods, Yoco and Nampak. Amazon Web Services maintains one of its largest facilities in the world in Cape Town with the city serving as the Africa headquarters for its parent company Amazon.",
"title": "Economy"
},
{
"paragraph_id": 81,
"text": "The city of Cape Town's Gini coefficient of 0.58 is lower than South Africa's Gini coefficient of 0.7 making it more equal than the rest of the country or any other major South Africa city although still highly unequal by international standards. Between 2001 and 2010 the city's Gini coefficient, a measure of inequality, improved by dropping from 0.59 in 2007 to 0.57 in 2010 only to increase to 0.58 by 2017.",
"title": "Economy"
},
{
"paragraph_id": 82,
"text": "The Western Cape is a highly important tourist region in South Africa; the tourism industry accounts for 9.8% of the GDP of the province and employs 9.6% of the province's workforce. In 2010, over 1.5 million international tourists visited the area. Cape Town is not only a popular international tourist destination in South Africa, but Africa as a whole. This is due to its mild climate, natural setting, and well-developed infrastructure. The city has several well-known natural features that attract tourists, most notably Table Mountain, which forms a large part of the Table Mountain National Park and is the back end of the City Bowl. Reaching the top of the mountain can be achieved either by hiking up, or by taking the Table Mountain Cableway. Cape Point is the dramatic headland at the end of the Cape Peninsula. Many tourists also drive along Chapman's Peak Drive, a narrow road that links Noordhoek with Hout Bay, for the views of the Atlantic Ocean and nearby mountains. It is possible to either drive or hike up Signal Hill for closer views of the City Bowl and Table Mountain.",
"title": "Economy"
},
{
"paragraph_id": 83,
"text": "Many tourists also visit Cape Town's beaches, which are popular with local residents. It is possible to visit several different beaches in the same day, each with a different setting and atmosphere. Both coasts are popular, although the beaches in affluent Clifton and elsewhere on the Atlantic Coast are better developed with restaurants and cafés, with a strip of restaurants and bars accessible to the beach at Camps Bay. The Atlantic seaboard, known as Cape Town's Riviera, is regarded as one of the most scenic routes in South Africa, along the slopes of the Twelve Apostles to the boulders and white sand beaches of Llandudno, with the route ending in Hout Bay, a diverse suburb with a fishing and recreational boating harbour near a small island with a breeding colony of African fur seals. This suburb is also accessible by road from the Constantia valley over the mountains to the northeast, and via the picturesque Chapman's Peak drive from the residential suburb Noordhoek in the Fish Hoek valley to the south-east. Boulders Beach near Simon's Town is known for its colony of African penguins.",
"title": "Economy"
},
{
"paragraph_id": 84,
"text": "The city has several notable cultural attractions. The Victoria & Alfred Waterfront, built on top of part of the docks of the Port of Cape Town, is the city's most visited tourist attraction. It is also one of the city's most popular shopping venues, with several hundred shops as well as the Two Oceans Aquarium. The V&A also hosts the Nelson Mandela Gateway, through which ferries depart for Robben Island. It is possible to take a ferry from the V&A to Hout Bay, Simon's Town and the Cape fur seal colonies on Seal and Duiker Islands. Several companies offer tours of the Cape Flats, a region of mostly Coloured & Black townships.",
"title": "Economy"
},
{
"paragraph_id": 85,
"text": "Within the metropolitan area, the most popular areas for visitors to stay include Camps Bay, Sea Point, the V&A Waterfront, the City Bowl, Hout Bay, Constantia, Rondebosch, Newlands, and Somerset West. In November 2013, Cape Town was voted the best global city in The Daily Telegraph's annual Travel Awards. Cape Town offers tourists a range of air, land and sea-based adventure activities, including helicopter rides, paragliding and skydiving, snorkelling and scuba diving, boat trips, game-fishing, hiking, mountain biking and rock climbing. Surfing is popular and the city hosts the Red Bull Big Wave Africa surfing competition every year, and there is some local and international recreational scuba tourism.",
"title": "Economy"
},
{
"paragraph_id": 86,
"text": "The City of Cape Town works closely with Cape Town Tourism to promote the city both locally and internationally. The primary focus of Cape Town Tourism is to represent Cape Town as a tourist destination. Cape Town Tourism receives a portion of its funding from the City of Cape Town while the remainder is made up of membership fees and own-generated funds. The Tristan da Cunha government owns and operates a lodging facility in Cape Town which charges discounted rates to Tristan da Cunha residents and non-resident natives. Cape Town's transport system links it to the rest of South Africa; it serves as the gateway to other destinations within the province. The Cape Winelands and in particular the towns of Stellenbosch, Paarl and Franschhoek are popular day trips from the city for sightseeing and wine tasting.",
"title": "Economy"
},
{
"paragraph_id": 87,
"text": "Most goods are handled through the Port of Cape Town or Cape Town International Airport. Most major shipbuilding companies have offices in Cape Town. The province is also a centre of energy development for the country, with the existing Koeberg nuclear power station providing energy for the Western Cape's needs.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 88,
"text": "Greater Cape Town has four major commercial nodes, with Cape Town Central Business District containing the majority of job opportunities and office space. Century City, the Bellville/Tygervalley strip and Claremont commercial nodes are well established and contain many offices and corporate headquarters.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 89,
"text": "Public primary and secondary schools in Cape Town are run by the Western Cape Education Department. This provincial department is divided into seven districts; four of these are \"Metropole\" districts – Metropole Central, North, South, and East – which cover various areas of the metropolis. There are also many private schools, both religious and secular. Cape Town has a well-developed higher system of public universities. Cape Town is served by three public universities: the University of Cape Town (UCT), the University of the Western Cape (UWC) and the Cape Peninsula University of Technology (CPUT). Stellenbosch University, while not based in the metropolitan area itself, has its main campus and administrative section 50 kilometres from the City Bowl and has additional campuses, such as the Tygerberg Faculty of Medicine and Health Sciences and the Bellville Business Park, north-west of the city in the town of Bellville.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 90,
"text": "Both the University of Cape Town and Stellenbosch University are leading universities in South Africa. This is due in large part to substantial financial contributions made to these institutions by both the public and private sector. UCT is an English-language tuition institution. It has over 21,000 students and has an MBA programme that was ranked 51st by the Financial Times in 2006. It is also the top-ranked university in Africa, being the only African university to make the world's Top 200 university list at number 146. Since the African National Congress has become the country's ruling party, some restructuring of Western Cape universities has taken place and as such, traditionally non-white universities have seen increased financing, which has evidently benefitted the University of the Western Cape.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 91,
"text": "The Cape Peninsula University of Technology was formed on 1 January 2005, when two separate institutions – Cape Technikon and Peninsula Technikon – were merged. The new university offers education primarily in English, although one may take courses in any of South Africa's official languages. The institution generally awards the National Diploma. Students from the universities and high schools are involved in the South African SEDS, Students for the Exploration and Development of Space. This is the South African SEDS, and there are many SEDS branches in other countries, preparing enthusiastic students and young professionals for the growing Space industry. As well as the Universities, there are also several colleges in and around Cape Town. Including the College of Cape Town, False Bay College and Northlink College. Many students use NSFAS funding to help pay for tertiary education at these TVET colleges. Cape Town has also become a popular study abroad destination for many international college students. Many study abroad providers offer semester, summer, short-term, and internship programs in partnership with Cape Town universities as a chance for international students to gain intercultural understanding.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 92,
"text": "The Western Cape Water Supply System (WCWSS) is a complex water supply system in the Western Cape region of South Africa, comprising an inter-linked system of six main dams, pipelines, tunnels and distribution networks, and a number of minor dams, some owned and operated by the Department of Water and Sanitation and some by the City of Cape Town.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 93,
"text": "The Cape Town water crisis of 2017 to 2018 was a period of severe water shortage in the Western Cape region, most notably affecting the City of Cape Town. While dam water levels had been declining since 2015, the Cape Town water crisis peaked during mid-2017 to mid-2018 when water levels hovered between 15 and 30 percent of total dam capacity.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 94,
"text": "In late 2017, there were first mentions of plans for \"Day Zero\", a shorthand reference for the day when the water level of the major dams supplying the city could fall below 13.5 percent. \"Day Zero\" would mark the start of Level 7 water restrictions, when municipal water supplies would be largely switched off and it was envisioned that residents could have to queue for their daily ration of water. If this had occurred, it would have made the City of Cape Town the first major city in the world to run out of water.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 95,
"text": "The city of Cape Town implemented significant water restrictions in a bid to curb water usage, and succeeded in reducing its daily water usage by more than half to around 500 million litres (130,000,000 US gal) per day in March 2018. The fall in water usage led the city to postpone its estimate for \"Day Zero\", and strong rains starting in June 2018 led to dam levels recovering. In September 2018, with dam levels close to 70 percent, the city began easing water restrictions, indicating that the worst of the water crisis was over. Good rains in 2020 effectively broke the drought and resulting water shortage when dam levels reached 95 percent. Concerns have been raised, however, that unsustainble demand and limited water supply could result in future drought events.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 96,
"text": "Cape Town International Airport serves both domestic and international flights. It is the second-largest airport in South Africa and serves as a major gateway for travelers to the Cape region. Cape Town has regularly scheduled services to Southern Africa, East Africa, Mauritius, Middle East, Far East, Europe and the United States as well as eleven domestic destinations. Cape Town International Airport opened a brand new central terminal building that was developed to handle an expected increase in air traffic as tourism numbers increased in the lead-up to the tournament of the 2010 FIFA World Cup. Other renovations include several large new parking garages, a revamped domestic departure terminal, a new Bus Rapid Transit system station and a new double-decker road system. The airport's cargo facilities are also being expanded and several large empty lots are being developed into office space and hotels.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 97,
"text": "Cape Town is one of five internationally recognised Antarctic gateway cities with transportation connections. Since 2021, commercial flights have operated from Cape Town to Wolf's Fang Runway, Antarctica. The Cape Town International Airport was among the winners of the World Travel Awards for being Africa's leading airport. Cape Town International Airport is located 18 km from the Central Business District.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 98,
"text": "Cape Town has a long tradition as a port city. The Port of Cape Town, the city's main port, is in Table Bay directly to the north of the CBD. The port is a hub for ships in the southern Atlantic: it is located along one of the busiest shipping corridors in the world, and acts as a stopover point for goods en route to or from Latin America and Asia. It is also an entry point into the South African market. It is the second-busiest container port in South Africa after Durban. In 2004, it handled 3,161 ships and 9.2 million tonnes of cargo.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 99,
"text": "Simon's Town Harbour on the False Bay coast of the Cape Peninsula is the main operational base of the South African Navy.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 100,
"text": "Until the 1970s the city was served by the Union Castle Line with service to the United Kingdom and St Helena. The RMS St Helena provided passenger and cargo service between Cape Town and St Helena until the opening of St Helena Airport.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 101,
"text": "The cargo vessel M/V Helena, under AW Shipping Management, takes a limited number of passengers, between Cape Town and St Helena and Ascension Island on its voyages. Multiple vessels also take passengers to and from Tristan da Cunha, inaccessible by aircraft, to and from Cape Town. In addition NSB Niederelbe Schiffahrtsgesellschaft [de] takes passengers on its cargo service to the Canary Islands and Hamburg, Germany.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 102,
"text": "The Shosholoza Meyl is the passenger rail operations of Spoornet and operates two long-distance passenger rail services from Cape Town: a daily service to and from Johannesburg via Kimberley and a weekly service to and from Durban via Kimberley, Bloemfontein and Pietermaritzburg. These trains terminate at Cape Town railway station and make a brief stop at Bellville. Cape Town is also one terminus of the luxury tourist-oriented Blue Train as well as the five-star Rovos Rail.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 103,
"text": "Metrorail operates a commuter rail service in Cape Town and the surrounding area. The Metrorail network consists of 96 stations throughout the suburbs and outskirts of Cape Town.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 104,
"text": "Cape Town is the origin of three national roads. The N1 and N2 begin in the foreshore area near the City Centre and the N7, which runs North toward Namibia. The N1 runs East-North-East through Edgemead, Parow, Bellville, Brackenfell and Kraaifontein. It connects Cape Town to major cities further inland, namely Bloemfontein, Johannesburg, and Pretoria An older at-grade road, the R101, runs parallel to the N1 from Bellville. The N2 runs East-South-East through Rondebosch, Guguletu, Khayelitsha, Macassar to Somerset West. It becomes a multiple-carriageway, at-grade road from the intersection with the R44 onward. The N2 continues east along the coast, linking Cape Town to the coastal cities of Mossel Bay, George, Port Elizabeth, East London and Durban. An older at-grade road, the R102, runs parallel to the N1 initially, before veering south at Bellville, to join the N2 at Somerset West via the suburbs of Kuils River and Eerste River. The N7 originates from the N1 at Wingfield Interchange near Edgemead. It begins, initially as a highway, but becoming an at-grade road from the intersection with the M5 onward.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 105,
"text": "There are also a number of regional routes linking Cape Town with surrounding areas. The R27 originates from the N1 near the Foreshore and runs north parallel to the N7, but nearer to the coast. It passes through the suburbs of Milnerton, Table View and Bloubergstrand and links the city to the West Coast, ending at the town of Velddrif. The R44 enters the east of the metro from the north, from Stellenbosch. It connects Stellenbosch to Somerset West, then crosses the N2 to Strand and Gordon's Bay. It exits the metro heading south hugging the coast, leading to the towns of Betty's Bay and Kleinmond.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 106,
"text": "Of the three-digit routes, the R300 is an expressway linking the N1 at Brackenfell to the N2 near Mitchells Plain and the Cape Town International Airport. The R302 runs from the R102 in Bellville, heading north across the N1 through Durbanville leaving the metro to Malmesbury. The R304 enters the northern limits of the metro from Stellenbosch, running NNW before veering west to cross the N7 at Philadelphia to end at Atlantis at a junction with the R307. This R307 starts north of Koeberg from the R27 and, after meeting the R304, continues north to Darling. The R310 originates from Muizenberg and runs along the coast, to the south of Mitchell's Plain and Khayelitsha, before veering north-east, crossing the N2 west of Macassar, and exiting the metro heading to Stellenbosch.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 107,
"text": "Cape Town, like most South African cities, uses Metropolitan or \"M\" routes for important intra-city routes, a layer below National (N) roads and Regional (R) routes. Each city's M roads are independently numbered. Most are at-grade roads. The M3 splits from the N2 and runs to the south along the eastern slopes of Table Mountain, connecting the City Bowl with Muizenberg. Except for a section between Rondebosch and Newlands that has at-grade intersections, this route is a highway. The M5 splits from the N1 further east than the M3, and links the Cape Flats to the CBD. It is a highway as far as the interchange with the M68 at Ottery, before continuing as an at-grade road. Cape Town has the worst traffic congestion in South Africa.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 108,
"text": "Golden Arrow Bus Services operates scheduled bus services in the Cape Town metropolitan area. Several companies run long-distance bus services from Cape Town to the other cities in South Africa.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 109,
"text": "Cape Town has a public transport system in about 10% of the city, running north to south along the west coastline of the city, comprising Phase 1 of the IRT system. This is known as the MyCiTi service.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 110,
"text": "MyCiTi Phase 1 includes services linking the Airport to the Cape Town inner city, as well as the following areas: Blouberg / Table View, Dunoon, Atlantis and Melkbosstrand, Milnerton, Paarden Eiland, Century City, Salt River and Walmer Estate, and all suburbs of the City Bowl and Atlantic Seaboard all the way to Llandudno and Hout Bay.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 111,
"text": "The MyCiTi N2 Express service consists of two routes each linking the Cape Town inner city and Khayelitsha and Mitchells Plain on the Cape Flats.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 112,
"text": "The service use high floor articulated and standard size buses in dedicated busways, low floor articulated and standard size buses on the N2 Express service, and smaller 9 m (30 ft) Optare buses in suburban and inner city areas. It offers universal access through level boarding and numerous other measures, and requires cashless fare payment using the EMV compliant smart card system, called myconnect. Headway of services (i.e. the time between buses on the same route) range from three to twenty minutes in peak times to an hour in off-peak times.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 113,
"text": "Cape Town has taxis as well as e-hailing services such as Uber. Taxis are either metered taxis or minibus taxis. Unlike many cities, metered taxis can be found at transport hubs as well as other tourist establishments, while minibus taxis can be found at taxi ranks or travelling along main streets. Minibus taxis can be hailed from the road.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 114,
"text": "Cape Town metered taxi cabs mostly operate in the city bowl, suburbs and Cape Town International Airport areas. Large companies that operate fleets of cabs can be reached by phone and are cheaper than the single operators that apply for hire from taxi ranks and Victoria and Alfred Waterfront. There are about one thousand meter taxis in Cape Town. Their rates vary from R8 per kilometre to about R15 per kilometre. The larger taxi companies in Cape Town are Excite Taxis, Cabnet and Intercab and single operators are reachable by cellular phone. The seven seated Toyota Avanza are the most popular with larger Taxi companies. Meter cabs are mostly used by tourists and are safer to use than minibus taxis.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 115,
"text": "Minibus taxis are the standard form of transport for the majority of the population who cannot afford private vehicles. Although essential, these taxis are often poorly maintained and are frequently not road-worthy. These taxis make frequent unscheduled stops to pick up passengers, which can cause accidents. With the high demand for transport by the working class of South Africa, minibus taxis are often filled over their legal passenger allowance. Minibuses are generally owned and operated in fleets.",
"title": "Infrastructure and services"
},
{
"paragraph_id": 116,
"text": "Cape Town is noted for its architectural heritage, with the highest density of Cape Dutch style buildings in the world. Cape Dutch style, which combines the architectural traditions of the Netherlands, Germany, France and Indonesia, is most visible in Constantia, the old government buildings in the Central Business District, and along Long Street. The annual Cape Town Minstrel Carnival, also known by its Afrikaans name of Kaapse Klopse, is a large minstrel festival held annually on 2 January or \"Tweede Nuwe Jaar\" (Second New Year). Competing teams of minstrels parade in brightly coloured costumes, performing Cape Jazz, either carrying colourful umbrellas or playing an array of musical instruments. The Artscape Theatre Centre is the largest performing arts venue in Cape Town. The city was named the World Design Capital for 2014 by the International Council of Societies of Industrial Design.",
"title": "Culture"
},
{
"paragraph_id": 117,
"text": "The city also encloses the 36 hectare Kirstenbosch National Botanical Garden that contains protected natural forest and fynbos along with a variety of animals and birds. There are over 7,000 species in cultivation at Kirstenbosch, including many rare and threatened species of the Cape Floristic Region. In 2004 this Region, including Kirstenbosch, was declared a UNESCO World Heritage Site.",
"title": "Culture"
},
{
"paragraph_id": 118,
"text": "Whale watching is popular amongst tourists: southern right whales and humpback whales are seen off the coast during the breeding season (August to November) and Bryde's whales and orca can be seen any time of the year. The nearby town of Hermanus is known for its Whale Festival, but whales can also be seen in False Bay. Heaviside's dolphins are endemic to the area and can be seen from the coast north of Cape Town; dusky dolphins live along the same coast and can occasionally be seen from the ferry to Robben Island.",
"title": "Culture"
},
{
"paragraph_id": 119,
"text": "The only complete windmill in South Africa is Mostert's Mill, Mowbray. It was built in 1796 and restored in 1935 and again in 1995.",
"title": "Culture"
},
{
"paragraph_id": 120,
"text": "Food originating from or synonymous with Cape Town includes the savoury sweet spiced meat dish Bobotie that dates from the 17th century. The Gatsby, a sandwich filled with slap chips and other toppings, was first served in 1976 in the suburb of Athlone and is also synonymous with the city. The koe'sister is a traditional Cape Malay pastry described as a cinnamon infused dumpling with a cake-like texture, finished off with a sprinkling of desiccated coconut. Malva pudding (sometimes known as Cape Malva pudding) is a sticky sweet dessert often served with hot custard is also associated with the city and dates back to the 17th century. A related dessert dish, Cape Brandy Pudding, is also associated with the city and surrounding region. Cape Town is also the home of the South African wine industry with the first wine produced in the country being bottled in the city; a number of notable wineries still exist in the city including Groot Constantia and Klein Constantia.",
"title": "Culture"
},
{
"paragraph_id": 121,
"text": "Several newspapers, magazines and printing facilities have their offices in the city. Independent News and Media publishes the major English language papers in the city, the Cape Argus and the Cape Times. Naspers, the largest media conglomerate in South Africa, publishes Die Burger, the major Afrikaans language paper.",
"title": "Culture"
},
{
"paragraph_id": 122,
"text": "Cape Town has many local community newspapers. Some of the largest community newspapers in English are the Athlone News from Athlone, the Atlantic Sun, the Constantiaberg Bulletin from Constantiaberg, the City Vision from Bellville, the False Bay Echo from False Bay, the Helderberg Sun from Helderberg, the Plainsman from Michell's Plain, the Sentinel News from Hout Bay, the Southern Mail from the Southern Peninsula, the Southern Suburbs Tatler from the Southern Suburbs, Table Talk from Table View and Tygertalk from Tygervalley/Durbanville. Afrikaans language community newspapers include the Landbou-Burger and the Tygerburger. Vukani, based in the Cape Flats, is published in Xhosa.",
"title": "Culture"
},
{
"paragraph_id": 123,
"text": "Cape Town is a centre for major broadcast media with several radio stations that only broadcast within the city. 94.5 Kfm (94.5 MHz FM) and Good Hope FM (94–97 MHz FM) mostly play pop music. Heart FM (104.9 MHz FM), the former P4 Radio, plays jazz and R&B, while Fine Music Radio (101.3 FM) plays classical music and jazz, and Magic Music Radio (828 kHz MW) plays adult contemporary and classic rock from the '60s, '70s, '80s, '90s and '00s. Bush Radio is a community radio station (89.5 MHz FM). The Voice of the Cape (95.8 MHz FM) and Cape Talk (567 kHz MW) are the major talk radio stations in the city. Bokradio (98.9 MHz FM) is an Afrikaans music station. The University of Cape Town also runs its own radio station, UCT Radio (104.5 MHz FM).",
"title": "Culture"
},
{
"paragraph_id": 124,
"text": "The SABC has a small presence in the city, with satellite studios located at Sea Point. e.tv has a greater presence, with a large complex located at Longkloof Studios in Gardens. M-Net is not well represented with infrastructure within the city. Cape Town TV is a local TV station, supported by numerous organisation and focusing mostly on documentaries. Numerous productions companies and their support industries are located in the city, mostly supporting the production of overseas commercials, model shoots, TV-series and movies. The local media infrastructure remains primarily in Johannesburg.",
"title": "Culture"
},
{
"paragraph_id": 125,
"text": "Cape Town's most popular sports by participation are cricket, association football, swimming, and rugby union. In rugby union, Cape Town is the home of the Western Province side, who play at Cape Town Stadium and compete in the Currie Cup. In addition, Western Province players (along with some from Wellington's Boland Cavaliers) comprise the Stormers in the United Rugby Championship competition. Cape Town has also been a host city for both the 1995 Rugby World Cup and 2010 FIFA World Cup, and annually hosts the Africa leg of the World Rugby 7s. It will also be host to the 2023 Netball World Cup.",
"title": "Culture"
},
{
"paragraph_id": 126,
"text": "Association football, which is mostly known as soccer in South Africa, is also popular. Two clubs from Cape Town play in the Premier Soccer League (PSL), South Africa's premier league. These teams are Ajax Cape Town, which formed as a result of the 1999 amalgamation of the Seven Stars and the Cape Town Spurs and resurrected Cape Town City F.C. Cape Town was also the location of several of the matches of the FIFA 2010 World Cup including a semi-final, held in South Africa. The Mother City built a new 70,000-seat stadium (Cape Town Stadium) in the Green Point area.",
"title": "Culture"
},
{
"paragraph_id": 127,
"text": "In cricket, the Cape Cobras represent Cape Town at the Newlands Cricket Ground. The team is the result of an amalgamation of the Western Province Cricket and Boland Cricket teams. They take part in the Supersport and Standard Bank Cup Series. The Newlands Cricket Ground regularly hosts international matches.",
"title": "Culture"
},
{
"paragraph_id": 128,
"text": "Cape Town has had Olympic aspirations. For example, in 1996, Cape Town was one of the five candidate cities shortlisted by the IOC to launch official candidatures to host the 2004 Summer Olympics. Although the Games ultimately went to Athens, Cape Town came in third place. There has been some speculation that Cape Town was seeking the South African Olympic Committee's nomination to be South Africa's bid city for the 2020 Summer Olympic Games. That was quashed when the International Olympic Committee awarded the 2020 Games to Tokyo.",
"title": "Culture"
},
{
"paragraph_id": 129,
"text": "The city of Cape Town has vast experience in hosting major national and international sports events. The Cape Town Cycle Tour is the world's largest individually timed road cycling race – and the first event outside Europe to be included in the International Cycling Union's Golden Bike series. It sees over 35,000 cyclists tackling a 109 km (68 mi) route around Cape Town. The Absa Cape Epic is the largest full-service mountain bike stage race in the world. Some notable events hosted by Cape Town have included the 1995 Rugby World Cup, 2003 ICC Cricket World Cup, and World Championships in various sports such as athletics, fencing, weightlifting, hockey, cycling, canoeing, gymnastics and others. Cape Town was also a host city to the 2010 FIFA World Cup from 11 June to 11 July 2010, further enhancing its profile as a major events city. It was also one of the host cities of the 2009 Indian Premier League cricket tournament. The Mother City has also played host to the Africa leg of the annual World Rugby 7s event since 2015; for nine seasons, from 2002 until 2010, the event was staged in George in the Western Cape, before moving to Port Elizabeth for the 2011 edition, and then to Cape Town in 2015. The event usually takes place in mid-December, and is hosted at the Cape Town Stadium in Green Point.",
"title": "Culture"
},
{
"paragraph_id": 130,
"text": "There are several golf courses in Cape Town. The Clovelly Country Club and Metropolitan Golf Club are two of the best Golf Courses in Cape Town both offering superb views while playing the 18 holes.",
"title": "Culture"
},
{
"paragraph_id": 131,
"text": "The coastline of Cape Town is relatively long, and the varied exposure to weather conditions makes it fairly common for water conditions to be conducive to recreational scuba diving at some part of the city's coast. There is considerable variation in the underwater environment and regional ecology as there are dive sites on reefs and wrecks on both sides of the Cape Peninsula and False Bay, split between two coastal marine ecoregions by the Cape Peninsula, and also variable by depth zone.",
"title": "Culture"
},
{
"paragraph_id": 132,
"text": "False Bay is open to the south, and the prevailing open ocean swell arrives from the southwest, so the exposure varies considerably around the coastline. The inshore bathymetry near Cape Point is shallow enough for a moderate amount of refraction of long period swell, but deep enough to have less effect on short period swell, and acts as a filter to pass mainly the longer swell components to the Western shores, although they are significantly attenuated. The eastern shores get more of the open ocean spectrum, and this results in very different swell conditions between the two sides at any given time. The fetch is generally too short for southeasterly winds to produce good surf. There are more than 20 named breaks in False Bay. The north-wester can have a long fetch and can produce large waves, but they may also be associated with local wind and be very poorly sorted. The Atlantic coast is exposed to the full power of the South-westerly swell produced by the westerly winds of the southern ocean, often a long way away, so the swell has time to separate into similar wavelengths, and there are some world class big wave breaks among the named breaks of the Atlantic shore.",
"title": "Culture"
}
] | Cape Town is the legislative capital of South Africa. It is the country's oldest city and the seat of the Parliament of South Africa. It is the country's second-largest city, after Johannesburg, and the largest in the Western Cape. The city is part of the City of Cape Town metropolitan municipality. The city is known for its harbour, its natural setting in the Cape Floristic Region, and for landmarks such as Table Mountain and Cape Point. In 2014, Cape Town was named the best place in the world to visit by The New York Times and similarly by The Daily Telegraph in 2016. Located on the shore of Table Bay, the City Bowl area of Cape Town is the oldest urban area in the Western Cape, with a significant cultural heritage. It was founded by the Dutch East India Company (VOC) as a supply station for Dutch ships sailing to East Africa, India, and the Far East. Jan van Riebeeck's arrival on 6 April 1652 established the VOC Cape Colony, the first permanent European settlement in South Africa. Cape Town outgrew its original purpose as the first European outpost at the Castle of Good Hope, becoming the economic and cultural hub of the Cape Colony. Until the Witwatersrand Gold Rush and the development of Johannesburg, Cape Town was the largest city in southern Africa. The metropolitan area has a long coastline on the Atlantic Ocean, which includes False Bay, and extends to the Hottentots Holland mountains to the east. The Table Mountain National Park is within the city boundaries and there are several other nature reserves and marine-protected areas within, and adjacent to, the city, protecting the diverse terrestrial and marine natural environment. | 2001-10-01T19:11:53Z | 2023-12-28T08:29:34Z | [
"Template:Notelist",
"Template:Commons",
"Template:Authority control",
"Template:Citation needed",
"Template:Annotated link",
"Template:Use dmy dates",
"Template:Efn",
"Template:Lang",
"Template:Flagicon",
"Template:When",
"Template:Cite journal",
"Template:Short description",
"Template:About",
"Template:Citation",
"Template:Cvt",
"Template:Relevance inline",
"Template:Not a typo",
"Template:Politics of Western Cape",
"Template:Ill",
"Template:Wikisource1911Enc",
"Template:Use South African English",
"Template:Infobox settlement",
"Template:Webarchive",
"Template:Provincial capitals of South Africa",
"Template:Wide image",
"Template:Div col",
"Template:Dash",
"Template:Cbignore",
"Template:Western Cape Province",
"Template:Multiple image",
"Template:Historical populations",
"Template:Further",
"Template:Cite encyclopedia",
"Template:Doi",
"Template:List of African capitals",
"Template:Main",
"Template:Lang-nl",
"Template:Div col end",
"Template:Currency",
"Template:Portal",
"Template:Cite book",
"Template:Dead link",
"Template:Oweb",
"Template:Rp",
"Template:See also",
"Template:Cite news",
"Template:Cite web",
"Template:Wikivoyage",
"Template:Cape Town",
"Template:Weather box",
"Template:Reflist"
] | https://en.wikipedia.org/wiki/Cape_Town |
6,654 | Chicago Cubs | The Chicago Cubs are an American professional baseball team based in Chicago. The Cubs compete in Major League Baseball (MLB) as part of the National League (NL) Central division. The club plays its home games at Wrigley Field, which is located on Chicago's North Side. The Cubs are one of two major league teams based in Chicago; the other, the Chicago White Sox, are a member of the American League (AL) Central division. The Cubs, first known as the White Stockings, were a founding member of the NL in 1876, becoming the Chicago Cubs in 1903.
Throughout the club's history, the Cubs have played in a total of 11 World Series. The 1906 Cubs won 116 games, finishing 116–36 and posting a modern-era record winning percentage of .763, before losing the World Series to the Chicago White Sox ("The Hitless Wonders") by four games to two. The Cubs won back-to-back World Series championships in 1907 and 1908, becoming the first major league team to play in three consecutive World Series, and the first to win it twice. Most recently, the Cubs won the 2016 National League Championship Series and 2016 World Series, which ended a 71-year National League pennant drought and a 108-year World Series championship drought, both of which are record droughts in Major League Baseball. The 108-year drought was also the longest such occurrence in all major sports leagues in the United States and Canada. Since the start of divisional play in 1969, the Cubs have appeared in the postseason 11 times through the 2022 season.
The Cubs are known as "the North Siders", a reference to the location of Wrigley Field within the city of Chicago, and in contrast to the White Sox, whose home field (Guaranteed Rate Field) is located on the South Side.
Through 2023, the franchise's all-time record is 11,244–10,688 (.513).
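For reference, the winning percentage quoted above is the standard baseball calculation of wins divided by total decisions; a quick check of the arithmetic:

$$\text{Win\%} = \frac{W}{W+L} = \frac{11{,}244}{11{,}244 + 10{,}688} = \frac{11{,}244}{21{,}932} \approx .513$$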
The Cubs began in 1870 as the Chicago White Stockings, playing their home games at West Side Grounds.
Six years later, they joined the National League (NL) as a charter member. In the runup to their NL debut, owner William Hulbert signed various star players, such as pitcher Albert Spalding and infielders Ross Barnes, Deacon White, and Adrian "Cap" Anson. The White Stockings quickly established themselves as one of the new league's top teams. Spalding won forty-seven games and Barnes led the league in hitting at .429 as Chicago won the first National League pennant, which at the time was the game's top prize.
After back-to-back pennants in 1880 and 1881, Hulbert died, and Spalding, who had retired from playing to start the Spalding sporting goods company, assumed ownership of the club. The White Stockings, with Anson acting as player-manager, captured their third consecutive pennant in 1882, and Anson established himself as the game's first true superstar. In 1885 and 1886, after winning NL pennants, the White Stockings met the champions of the short-lived American Association in that era's version of a World Series. Both seasons resulted in matchups with the St. Louis Brown Stockings; the clubs tied in 1885 and St. Louis won in 1886. This was the genesis of what would eventually become one of the greatest rivalries in sports. In all, the Anson-led Chicago Base Ball Club won six National League pennants between 1876 and 1886. By 1890, the team had become known as the Chicago Colts, or sometimes "Anson's Colts", referring to Cap's influence within the club. Anson was the first player in history credited with 3,000 career hits. In 1897, after a disappointing record of 59–73 and a ninth-place finish, Anson was released by the club as both a player and manager. His departure after 22 years led local newspaper reporters to refer to the Colts as the "Orphans".
After the 1900 season, the American League formed as a rival professional league. The club's old White Stockings nickname (eventually shortened to White Sox) was adopted by a new American League neighbor to the south.
In 1902, Spalding, who by this time had revamped the roster to boast what would soon be one of the best teams of the early century, sold the club to Jim Hart. The franchise was nicknamed the Cubs by the Chicago Daily News in 1902; it officially took the name five years later. During this period, which has become known as baseball's dead-ball era, Cub infielders Joe Tinker, Johnny Evers, and Frank Chance were made famous as a double-play combination by Franklin P. Adams' poem "Baseball's Sad Lexicon". The poem first appeared in the July 18, 1910 edition of the New York Evening Mail. Mordecai "Three-Finger" Brown, Jack Taylor, Ed Reulbach, Jack Pfiester, and Orval Overall were several key pitchers for the Cubs during this time period. With Chance acting as player-manager from 1905 to 1912, the Cubs won four pennants and two World Series titles over a five-year span. Although they fell to the "Hitless Wonders" White Sox in the 1906 World Series, the Cubs posted a record 116 victories and the best winning percentage (.763) in Major League history. With mostly the same roster, Chicago won back-to-back World Series championships in 1907 and 1908, becoming the first Major League club to play three times in the Fall Classic and the first to win it twice. However, the Cubs would not win another World Series until 2016; this remains the longest championship drought in North American professional sports.
The next season, veteran catcher Johnny Kling left the team to become a professional pocket billiards player. Some historians think Kling's absence was significant enough to prevent the Cubs from also winning a third straight title in 1909, as they finished 6 games out of first place. When Kling returned the next year, the Cubs won the pennant again, but lost to the Philadelphia Athletics in the 1910 World Series.
In 1914, advertising executive Albert Lasker obtained a large block of the club's shares, and before the 1916 season he assumed majority ownership of the franchise. Lasker brought in a wealthy partner, Charles Weeghman, the proprietor of a popular chain of lunch counters who had previously owned the Chicago Whales of the short-lived Federal League. As principal owners, the pair moved the club from the West Side Grounds to the much newer Weeghman Park, which had been constructed for the Whales only two years earlier and where the Cubs remain to this day. The Cubs responded by winning a pennant in the war-shortened season of 1918, in which they played a part in another team's curse: the Boston Red Sox defeated Grover Cleveland Alexander's Cubs four games to two in the 1918 World Series, Boston's last Series championship until 2004.
Beginning in 1916, Bill Wrigley of chewing-gum fame acquired an increasing quantity of stock in the Cubs. By 1921 he was the majority owner, maintaining that status into the 1930s.
Meanwhile, 1919 saw the start of Bill Veeck Sr.'s tenure as team president. Veeck would hold that post throughout the 1920s and into the 1930s. The management team of Wrigley and Veeck came to be known as the "double-Bills".
Near the end of the first decade of the double-Bills' guidance, the Cubs won the NL pennant in 1929 and then achieved the unusual feat of winning a pennant every three years, following up the 1929 flag with league titles in 1932, 1935, and 1938. Unfortunately, their success did not extend to the Fall Classic, as they fell to their AL rivals each time. The '32 series against the Yankees featured Babe Ruth's "called shot" at Wrigley Field in game three. There were some historic moments for the Cubs as well; in 1930, Hack Wilson, one of the top home run hitters in the game, had one of the most impressive seasons in MLB history, hitting 56 home runs and establishing the current runs-batted-in record of 191. That 1930 club, which boasted six eventual Hall of Fame members (Wilson, Gabby Hartnett, Rogers Hornsby, George "High Pockets" Kelly, Kiki Cuyler and manager Joe McCarthy), established the current team batting average record of .309. In 1935 the Cubs claimed the pennant in thrilling fashion, winning a record 21 games in a row in September. The '38 club, with Dizzy Dean leading the pitching staff, provided a historic moment when it won a crucial late-season game at Wrigley Field over the Pittsburgh Pirates on a walk-off home run by Gabby Hartnett, which became known in baseball lore as "The Homer in the Gloamin'".
After the "Double-Bills" (Wrigley and Veeck) died in 1932 and 1933 respectively, P.K. Wrigley, son of Bill Wrigley, took over as majority owner. He was unable to extend his father's baseball success beyond 1938, and the Cubs slipped into years of mediocrity, although the Wrigley family would retain control of the team until 1981.
The Cubs enjoyed one more pennant at the close of World War II, finishing 98–56. Due to wartime travel restrictions, the first three games of the 1945 World Series were played in Detroit, where the Cubs won two games, including a one-hitter by Claude Passeau; the final four were played at Wrigley. The Cubs lost the Series to the Detroit Tigers and would not return to the Fall Classic until 2016. They finished with a respectable 82–71 record the following year, but that was only good enough for third place.
In the following two decades, the Cubs played mostly forgettable baseball, finishing among the worst teams in the National League on an almost annual basis. From 1947 to 1966, they only notched one winning season. Longtime infielder-manager Phil Cavarretta, who had been a key player during the 1945 season, was fired during spring training in 1954 after admitting the team was unlikely to finish above fifth place. Although shortstop Ernie Banks would become one of the star players in the league during the next decade, finding help for him proved a difficult task, as quality players such as Hank Sauer were few and far between. This, combined with poor ownership decisions such as the College of Coaches, and the ill-fated trade of future Hall of Fame member Lou Brock to the Cardinals for pitcher Ernie Broglio (who won only seven games over the next three seasons), hampered on-field performance.
The late-1960s brought hope of a renaissance, with third baseman Ron Santo, pitcher Ferguson Jenkins, and outfielder Billy Williams joining Banks. After losing a dismal 103 games in 1966, the Cubs brought home consecutive winning records in '67 and '68, marking the first time a Cub team had accomplished that feat in over two decades.
In 1969 the Cubs, managed by Leo Durocher, built a substantial lead in the newly created National League Eastern Division by mid-August. Ken Holtzman pitched a no-hitter on August 19, and the division lead grew to 8 1⁄2 games over the St. Louis Cardinals and 9 1⁄2 games over the New York Mets. After the game of September 2, the Cubs' record was 84–52, with the Mets in second place at 77–55. But then a Cubs losing streak began just as the Mets caught fire. The Cubs lost the final game of a series at Cincinnati, then came home to play the resurgent Pittsburgh Pirates (who would finish in third place). After losing the first two games by scores of 9–2 and 13–4, the Cubs led going into the ninth inning of the third game. A win would have been a positive springboard, since the Cubs were to open a crucial series with the Mets the next day. But Willie Stargell drilled a two-out, two-strike pitch from the Cubs' ace reliever, Phil Regan, onto Sheffield Avenue to tie the score in the top of the ninth, and the Cubs lost 7–5 in extra innings. Burdened by a four-game losing streak, the Cubs traveled to Shea Stadium for a short two-game set. The Mets won both games, and the Cubs left New York with a record of 84–58, just 1⁄2 game in front. More of the same followed in Philadelphia, as a 99-loss Phillies team nonetheless defeated the Cubs twice, extending Chicago's losing streak to eight games. In a key play in the second game, on September 11, Cubs starter Dick Selma threw a surprise pickoff attempt to third baseman Ron Santo, who was nowhere near the bag or the ball. Selma's throwing error opened the gates to a Phillies rally. After that second Philadelphia loss, the Cubs were 84–60 and the Mets had pulled ahead at 85–57; the Mets would not look back. The Cubs' eight-game losing streak finally ended the next day in St. Louis, but the Mets were in the midst of a ten-game winning streak, and the Cubs, wilting from fatigue, deteriorated in all phases of the game. The Mets, who had lost a record 120 games just seven years earlier, went on to win the World Series. The Cubs, despite a respectable 92–70 record, would be remembered for having lost a remarkable 17 1⁄2 games in the standings to the Mets over the last quarter of the season.
Following the 1969 season, the club posted winning records for the next few seasons, but no playoff appearances. After the core players of those teams moved on, the 1970s got worse for the team, and they became known as "the Lovable Losers". In 1977, the team found some life, but ultimately experienced one of its biggest collapses. The Cubs hit a high-water mark on June 28 at 47–22, boasting an 8 1⁄2 game NL East lead behind Bobby Murcer (27 HR/89 RBI) and Rick Reuschel (20–10). However, the Philadelphia Phillies cut the lead to two by the All-Star break, with the Cubs sitting 19 games over .500, and Chicago swooned late in the season, going 20–40 after July 31. The Cubs finished in fourth place at 81–81, while Philadelphia surged to finish with 101 wins. The following two seasons also saw the Cubs get off to fast starts, climbing to more than 10 games above .500 well into both seasons, only to wear down again, play poorly late, and ultimately settle back into mediocrity. This trait became known as the "June Swoon". The Cubs' unusually high number of day games is often pointed to as one reason for the team's inconsistent late-season play.
P.K. Wrigley died in 1977. The Wrigley family sold the team to the Tribune Company in 1981, ending a 65-year family relationship with the Cubs.
After over a dozen more subpar seasons, the Cubs hired GM Dallas Green from Philadelphia in 1981 to turn around the franchise. Green had managed the 1980 Phillies to the World Series title. One of his early moves brought in a young Phillies minor-league third baseman named Ryne Sandberg, along with Larry Bowa, for Iván DeJesús. The 1983 Cubs finished 71–91 under Lee Elia, who was fired by Green before the season ended. Green continued the culture of change, overhauling the Cubs' roster, front office and coaching staff prior to 1984. Jim Frey was hired to manage the 1984 Cubs, with Don Zimmer coaching third base and Billy Connors serving as pitching coach.
Green shored up the 1984 roster with a series of transactions. In December 1983, Scott Sanderson was acquired from Montreal in a three-team deal with San Diego for Carmelo Martínez. Pinch hitter Richie Hebner (.333 BA in 1984) was signed as a free agent. In spring training, the moves continued: LF Gary Matthews and CF Bobby Dernier came from Philadelphia on March 26 for Bill Campbell and a minor leaguer. Reliever Tim Stoddard (10–6 3.82, 7 saves) was acquired the same day for a minor leaguer; veteran pitcher Ferguson Jenkins was released.
The team's commitment to contend was complete when Green made a midseason deal on June 15 to shore up a starting rotation depleted by injuries to Rick Reuschel (5–5) and Sanderson. The deal brought 1979 NL Rookie of the Year pitcher Rick Sutcliffe from the Cleveland Indians. Joe Carter (who was with the Triple-A Iowa Cubs at the time) and right fielder Mel Hall were sent to Cleveland for Sutcliffe and back-up catcher Ron Hassey (.333 with the Cubs in 1984). Sutcliffe (5–5 with the Indians) immediately joined Sanderson (8–5 3.14), Eckersley (10–8 3.03), Steve Trout (13–7 3.41) and Dick Ruthven (6–10 5.04) in the starting rotation. Sutcliffe proceeded to go 16–1 for the Cubs and capture the Cy Young Award.
The Cubs' 1984 starting lineup was very strong. It consisted of LF Matthews (.291 14–82 101 runs 17 SB), C Jody Davis (.256 19–94), RF Keith Moreland (.279 16–80), SS Larry Bowa (.223 10 SB), 1B Leon "Bull" Durham (.279 23–96 16 SB), CF Dernier (.278 45 SB), 3B Ron Cey (.240 25–97), closer Lee Smith (9–7 3.65 33 saves) and 1984 NL MVP Ryne Sandberg (.314 19–84 114 runs, 19 triples, 32 SB).
Reserve players Hebner, Thad Bosley, Henry Cotto, Hassey and Dave Owen produced exciting moments. The bullpen depth of Rich Bordi, George Frazier, Warren Brusstar and Dickie Noles did its job in getting the game to Smith or Stoddard.
At the top of the order, Dernier and Sandberg were exciting, aptly coined "the Daily Double" by Harry Caray. With strong defense (Dernier in center and Sandberg at second base each won the NL Gold Glove), solid pitching and clutch hitting, the Cubs were a well-balanced team. Following the "Daily Double", Matthews, Durham, Cey, Moreland and Davis gave the Cubs an order with no gaps to pitch around. Sutcliffe anchored a strong top-to-bottom rotation, and Smith was one of the top closers in the game.
The shift in the Cubs' fortunes was epitomized on June 23 in the "NBC Saturday Game of the Week" contest against the St. Louis Cardinals; it has since been dubbed simply "The Sandberg Game". With the nation watching and Wrigley Field packed, Sandberg emerged as a superstar with not one, but two game-tying home runs against Cardinals closer Bruce Sutter. With his shots in the 9th and 10th innings, Wrigley Field erupted and Sandberg set the stage for a comeback win that cemented the Cubs as the team to beat in the East. No one would catch them.
In early August the Cubs swept the Mets in a 4-game home series that further distanced them from the pack. An infamous Keith Moreland-Ed Lynch fight erupted after Lynch hit Moreland with a pitch, perhaps forgetting Moreland was once a linebacker at the University of Texas. It was the second game of a doubleheader and the Cubs had won the first game in part due to a three-run home run by Moreland. After the bench-clearing fight, the Cubs won the second game, and the sweep put the Cubs at 68–45.
In 1984, each league had two divisions, East and West. The divisional winners met in a best-of-five series to advance to the World Series, played in a "2–3" format: the first two games were played at the home of the team without home-field advantage, and the last three at the home of the team with it. Thus the first two games were played at Wrigley Field and the next three at the home of their opponents, San Diego. A common and unfounded myth is that, since Wrigley Field did not have lights at that time, the National League decided to give home-field advantage to the winner of the NL West. In fact, home-field advantage had rotated between the winners of the East and West since 1969, when the league expanded: in even-numbered years the NL West had home-field advantage, and in odd-numbered years the NL East had it. Since the NL East winners had had home-field advantage in 1983, the NL West winners were entitled to it in 1984.
The confusion may stem from the fact that Major League Baseball did decide that, should the Cubs make it to the World Series, the American League winner would have home-field advantage. At the time, home-field advantage in the World Series alternated between the leagues: the AL had it in odd-numbered years and the NL in even-numbered years. In the 1982 World Series, the St. Louis Cardinals of the NL had home-field advantage; in the 1983 World Series, it belonged to the Baltimore Orioles of the AL.
In the NLCS, the Cubs easily won the first two games at Wrigley Field against the San Diego Padres, winners of the Western Division with Steve Garvey, Tony Gwynn, Eric Show, Goose Gossage and Alan Wiggins. With wins of 13–0 and 4–2, the Cubs needed to win only one game of the next three in San Diego to make it to the World Series. After being beaten 7–1 in Game 3, the Cubs lost Game 4 when Smith, with the game tied 5–5, allowed a game-winning home run to Garvey in the bottom of the ninth inning. In Game 5 the Cubs took a 3–0 lead into the sixth inning, and a 3–2 lead into the seventh, with Sutcliffe (who won the Cy Young Award that year) still on the mound. Then Leon Durham had a sharp grounder go under his glove. This critical error helped the Padres score four runs in the seventh and win the game 6–3, keeping Chicago out of the 1984 World Series against the Detroit Tigers. The loss ended a spectacular season for the Cubs, one that brought a slumbering franchise back to life and made the Cubs relevant for a whole new generation of fans.
The Padres would be defeated in 5 games by Sparky Anderson's Tigers in the World Series.
The 1985 season brought high hopes. The club started out well, going 35–19 through mid-June, but injuries to Sutcliffe and others in the pitching staff contributed to a 13-game losing streak that pushed the Cubs out of contention.
In 1989, the first full season with night baseball at Wrigley Field, Don Zimmer's Cubs were led by a core group of veterans in Ryne Sandberg, Rick Sutcliffe and Andre Dawson, who were boosted by a crop of youngsters such as Mark Grace, Shawon Dunston, Greg Maddux, Rookie of the Year Jerome Walton, and Rookie of the Year runner-up Dwight Smith. The Cubs won the NL East once again that season, winning 93 games. This time the Cubs met the San Francisco Giants in the NLCS. After splitting the first two games at home, the Cubs headed to the Bay Area, where, despite holding a lead at some point in each of the next three games, bullpen meltdowns and managerial blunders ultimately led to three straight losses. The Cubs could not overcome the efforts of Will Clark, whose home run off Maddux, just after a managerial visit to the mound, led Maddux to believe Clark had known what pitch was coming. Afterward, Maddux would speak into his glove during any mound conversation, beginning what is now the norm. Mark Grace was 11-for-17 in the series with 8 RBI. Eventually, the Giants lost to the "Bash Brothers" and the Oakland A's in the famous "Earthquake Series".
The 1998 season began on a somber note with the death of broadcaster Harry Caray. After the retirement of Sandberg and the trade of Dunston, the Cubs had holes to fill, and the signing of Henry Rodríguez to bat cleanup provided protection for Sammy Sosa in the lineup, as Rodríguez slugged 31 round-trippers in his first season in Chicago. Kevin Tapani led the club with a career-high 19 wins, while Rod Beck anchored a strong bullpen and Mark Grace turned in one of his best seasons. The Cubs were swamped by media attention in 1998, and the team's two biggest headliners were Sosa and rookie flamethrower Kerry Wood. Wood's signature performance was a one-hitter against the Houston Astros in which he tied the major league record of 20 strikeouts in nine innings. His torrid strikeout numbers earned Wood the nickname "Kid K" and, ultimately, the 1998 NL Rookie of the Year award. Sosa caught fire in June, hitting a major league record 20 home runs in the month, and his home run race with Cardinals slugger Mark McGwire transformed the pair into international superstars in a matter of weeks. McGwire finished the season with a new major league record of 70 home runs, but Sosa's .308 average and 66 homers earned him the National League MVP Award. After a down-to-the-wire Wild Card chase with the San Francisco Giants, Chicago and San Francisco ended the regular season tied, and thus squared off in a one-game playoff at Wrigley Field. Third baseman Gary Gaetti hit the eventual game-winning homer in the playoff game. The win propelled the Cubs into the postseason for the first time since 1989, with a 90–73 regular-season record. The bats went cold in October, however, as manager Jim Riggleman's club batted .183 and scored only four runs en route to being swept by Atlanta in the National League Division Series. The home run chase between Sosa, McGwire and Ken Griffey Jr. helped professional baseball bring in a new crop of fans, as well as winning back some fans who had been disillusioned by the 1994 strike. The Cubs retained many players who had career years in 1998, but after a fast start in 1999 they collapsed again (starting with being swept by the cross-town White Sox in mid-June) and finished at the bottom of the division for the next two seasons.
Despite losing fan favorite Grace to free agency and the lack of production from newcomer Todd Hundley, skipper Don Baylor's Cubs put together a good season in 2001. The season started with Mack Newton being brought in to preach "positive thinking". One of the biggest stories of the season transpired as the club made a midseason deal for Fred McGriff, which was drawn out for nearly a month as McGriff debated waiving his no-trade clause. The Cubs led the wild card race by 2.5 games in early September, but crumbled when Preston Wilson hit a three-run walk-off homer off closer Tom "Flash" Gordon, which halted the team's momentum. The team was unable to make another serious charge, and finished at 88–74, five games behind both Houston and St. Louis, who tied for first. Sosa had perhaps his finest season and Jon Lieber led the staff with a 20-win season.
The Cubs had high expectations in 2002, but the squad played poorly. On July 5, 2002, the Cubs promoted assistant general manager and player personnel director Jim Hendry to the General Manager position. The club responded by hiring Dusty Baker and by making some major moves in 2003. Most notably, they traded with the Pittsburgh Pirates for outfielder Kenny Lofton and third baseman Aramis Ramírez, and rode dominant pitching, led by Kerry Wood and Mark Prior, as the Cubs led the division down the stretch.
Chicago halted St. Louis' run to the playoffs by taking four of five games from the Cardinals at Wrigley Field in early September, after which they won their first division title in 14 years. They then went on to defeat the Atlanta Braves in a dramatic five-game Division Series, the franchise's first postseason series win since beating the Detroit Tigers in the 1908 World Series.
After losing an extra-inning game in Game 1, the Cubs rallied and took a three-games-to-one lead over the Wild Card Florida Marlins in the National League Championship Series. Florida shut the Cubs out in Game 5, but the Cubs returned home to Wrigley Field with young pitcher Mark Prior leading the way in Game 6, taking a 3–0 lead into the 8th inning. At this point, a now-infamous incident took place. Several spectators attempted to catch a foul ball off the bat of Luis Castillo. A Chicago Cubs fan by the name of Steve Bartman, of Northbrook, Illinois, reached for the ball and deflected it away from the glove of Moisés Alou for the second out of the eighth inning. Alou reacted angrily toward the stands, and after the game stated that he would have caught the ball. Alou at one point recanted, saying he would not have been able to make the play, but later said this was just an attempt to make Bartman feel better and that he believed the whole incident should be forgotten. Interference was not called on the play, as the ball was ruled to be on the spectator side of the wall. Castillo was eventually walked by Prior. Two batters later, and to the chagrin of the packed stadium, Cubs shortstop Alex Gonzalez misplayed an inning-ending double play, loading the bases. The error led to eight Florida runs and a Marlins victory. Despite sending Kerry Wood to the mound and holding a lead twice, the Cubs ultimately dropped Game 7 and failed to reach the World Series.
The "Steve Bartman incident" was seen as the "first domino" in the turning point of the era, and the Cubs did not win a playoff game for the next eleven seasons.
In 2004, the Cubs were a consensus pick by most media outlets to win the World Series. The offseason acquisition of Derrek Lee (who was acquired in a trade with Florida for Hee-seop Choi) and the return of Greg Maddux only bolstered these expectations. Despite a mid-season deal for Nomar Garciaparra, misfortune struck the Cubs again. They led the Wild Card race by 1.5 games over San Francisco and Houston on September 25. On that day, both of those teams lost, giving the Cubs a chance to increase the lead to 2.5 games with only eight games remaining in the season, but reliever LaTroy Hawkins blew a save against the Mets, and the Cubs lost the game in extra innings. The defeat seemingly deflated the team, as they proceeded to drop six of their last eight games while the Astros won the Wild Card.
Although the Cubs had won 89 games, the fallout was decidedly unlovable: the Cubs traded superstar Sammy Sosa after he left the season's final game following the first pitch, which resulted in a fine (Sosa later stated that he had gotten permission from Baker to leave early, but regretted doing so). Already a controversial figure in the clubhouse after his corked-bat incident, Sosa's actions alienated much of his once strong fan base as well as the few teammates still on good terms with him, to the point where his boombox was reportedly smashed after he left, signifying the end of an era. The disappointing season also saw fans grow frustrated with the constant injuries to ace pitchers Mark Prior and Kerry Wood. Additionally, the 2004 season led to the departure of popular commentator Steve Stone, who had become increasingly critical of management during broadcasts and was verbally attacked by reliever Kent Mercker. Things were no better in 2005, despite a career year from first baseman Derrek Lee and the emergence of closer Ryan Dempster. The club struggled and suffered more key injuries, managing to win only 79 games after being picked by many as a serious contender for the NL pennant. In 2006, the bottom fell out as the Cubs finished 66–96, last in the NL Central.
After finishing last in the NL Central with 66 wins in 2006, the Cubs re-tooled and went from "worst to first" in 2007. In the offseason they signed Alfonso Soriano to an eight-year, $136 million contract and replaced manager Dusty Baker with fiery veteran manager Lou Piniella. After a rough start, which included a brawl between Michael Barrett and Carlos Zambrano, the Cubs overcame the Milwaukee Brewers, who had led the division for most of the season. The Cubs traded Barrett to the Padres and later acquired catcher Jason Kendall from Oakland. Kendall was highly successful in his management of the pitching rotation and helped at the plate as well. By September, Geovany Soto became the full-time starter behind the plate, replacing the veteran Kendall. Winning streaks in June and July, coupled with a pair of dramatic late-inning wins against the Reds, led to the Cubs ultimately clinching the NL Central with a record of 85–77. They met Arizona in the NLDS, but controversy followed as Piniella, in a move that has since come under scrutiny, pulled Carlos Zambrano after the sixth inning of a pitcher's duel with D-Backs ace Brandon Webb, to "...save Zambrano for (a potential) Game 4." The Cubs, however, were unable to come through, losing the first game and eventually stranding over 30 baserunners in a three-game Arizona sweep.
The Tribune Company, in financial distress, was acquired by real-estate mogul Sam Zell in December 2007. The acquisition included the Cubs. However, Zell did not take an active part in running the baseball franchise, instead concentrating on putting together a deal to sell it.
The Cubs successfully defended their National League Central title in 2008, going to the postseason in consecutive years for the first time since 1906–08. The offseason was dominated by three months of unsuccessful trade talks with the Orioles involving 2B Brian Roberts, as well as the signing of Chunichi Dragons star Kosuke Fukudome. The team recorded their 10,000th win in April, while establishing an early division lead. Reed Johnson and Jim Edmonds were added early on and Rich Harden was acquired from the Oakland Athletics in early July. The Cubs headed into the All-Star break with the NL's best record, and tied the league record with eight representatives to the All-Star game, including catcher Geovany Soto, who was named Rookie of the Year. The Cubs took control of the division by sweeping a four-game series in Milwaukee. On September 14, in a game moved to Miller Park due to Hurricane Ike, Zambrano pitched a no-hitter against the Astros, and six days later the team clinched by beating St. Louis at Wrigley. The club ended the season with a 97–64 record and met Los Angeles in the NLDS. The heavily favored Cubs took an early lead in Game 1, but James Loney's grand slam off Ryan Dempster changed the series' momentum. Chicago committed numerous critical errors and were outscored 20–6 in a Dodger sweep, which provided yet another sudden ending.
The Ricketts family acquired a majority interest in the Cubs in 2009, ending the Tribune years. Handcuffed by the Tribune's bankruptcy and the pending sale of the club to the Ricketts siblings, led by chairman Thomas S. Ricketts, the Cubs' quest for an NL Central three-peat started with notice that there would be less invested in contracts than in previous years. Chicago engaged St. Louis in a see-saw battle for first place into August 2009, but the Cardinals played at a torrid 20–6 pace that month, relegating their rivals to the Wild Card race, from which the Cubs were eliminated in the season's final week. The Cubs were plagued by injuries in 2009, and were able to field their Opening Day starting lineup only three times the entire season. Third baseman Aramis Ramírez injured his throwing shoulder in an early May game against the Milwaukee Brewers, sidelining him until early July and forcing journeyman players like Mike Fontenot and Aaron Miles into more prominent roles. Additionally, key players like Derrek Lee (who still managed to hit .306 with 35 home runs and 111 RBI that season), Alfonso Soriano, and Geovany Soto nursed nagging injuries. The Cubs posted a winning record (83–78) for the third consecutive season, the first time the club had done so since 1972, and the new era of ownership under the Ricketts family was approved by MLB owners in early October.
Rookie Starlin Castro debuted in early May 2010 as the starting shortstop. The club played poorly in the early season, finding themselves 10 games under .500 at the end of June. In addition, long-time ace Carlos Zambrano was pulled from a game against the White Sox on June 25 after a tirade and shoving match with Derrek Lee, and was suspended indefinitely by Jim Hendry, who called the conduct "unacceptable". On August 22, Lou Piniella, who had already announced his retirement at the end of the season, announced that he would leave the Cubs prematurely to take care of his sick mother. Mike Quade took over as interim manager for the final 37 games of the year. Despite being well out of playoff contention, the Cubs went 24–13 under Quade, the best record in baseball during that 37-game stretch, earning Quade the permanent managerial job on October 19.
On December 3, 2010, Cubs broadcaster and former third baseman Ron Santo died due to complications from bladder cancer and diabetes. He spent 13 seasons as a player with the Cubs, and at the time of his death was regarded as one of the greatest players not in the Hall of Fame. He was posthumously elected to the National Baseball Hall of Fame in 2012.
Despite trading for pitcher Matt Garza and signing free-agent slugger Carlos Peña, the Cubs finished the 2011 season 20 games under .500 with a record of 71–91. Weeks after the season ended, the club was rejuvenated by a new philosophy, as new owner Tom Ricketts signed Theo Epstein away from the Boston Red Sox, naming him club president and giving him a five-year contract worth over $18 million, and subsequently discharged manager Mike Quade. Epstein, a proponent of sabermetrics and one of the architects of the 2004 and 2007 World Series championships in Boston, brought along Jed Hoyer from the Padres to fill the role of GM and hired Dale Sveum as manager. Although the team had a dismal 2012 season, losing 101 games (the worst record since 1966), it was largely expected. The youth movement ushered in by Epstein and Hoyer began as longtime fan favorite Kerry Wood retired in May, followed by Ryan Dempster and Geovany Soto being traded to Texas at the All-Star break for a group of minor league prospects headlined by Christian Villanueva, which also included a then little-regarded Kyle Hendricks. The development of Castro, Anthony Rizzo, Darwin Barney, Brett Jackson and pitcher Jeff Samardzija, as well as the replenishing of the minor-league system with prospects such as Javier Báez, Albert Almora, and Jorge Soler, became the primary focus of the season, a philosophy which the new management said would carry over at least through the 2013 season.
The 2013 season played out in much the same way as the year before. Shortly before the trade deadline, the Cubs traded Matt Garza to the Texas Rangers for Mike Olt, Carl Edwards Jr., Neil Ramirez, and Justin Grimm. Three days later, the Cubs sent Alfonso Soriano to the New York Yankees for minor leaguer Corey Black. The midseason fire sale led to another last-place finish in the NL Central, with a record of 66–96. Although the record showed a five-game improvement over the year before, Anthony Rizzo and Starlin Castro seemed to take steps backward in their development. On September 30, 2013, Theo Epstein made the decision to fire manager Dale Sveum after just two seasons at the helm. The regression of several young players was thought to be the main factor, as the front office had said Sveum would not be judged on wins and losses. In two seasons as skipper, Sveum finished with a record of 127–197.
The 2013 season was also notable as the Cubs drafted future Rookie of the Year and MVP Kris Bryant with the second overall selection.
On November 7, 2013, the Cubs hired San Diego Padres bench coach Rick Rentería to be the 53rd manager in team history. The Cubs finished the 2014 season in last place with a 73–89 record in Rentería's first and only season as manager. Despite the poor record, the Cubs improved in many areas during 2014, including rebound years by Anthony Rizzo and Starlin Castro, a winning record at home for the first time since 2009, and a 33–34 record after the All-Star break. However, following the unexpected availability of Joe Maddon, who exercised a clause in his Tampa Bay contract that was triggered on October 14 by the departure of general manager Andrew Friedman to the Los Angeles Dodgers, the Cubs relieved Rentería of his managerial duties on October 31, 2014. During the season, the Cubs drafted Kyle Schwarber with the fourth overall selection.
Hall of Famer Ernie Banks died of a heart attack on January 23, 2015, shortly before his 84th birthday. The 2015 uniform carried a commemorative #14 patch on both its home and away jerseys in his honor.
On November 2, 2014, the Cubs announced that Joe Maddon had signed a five-year contract to be the 54th manager in team history. On December 10, 2014, Maddon announced that the team had signed free agent Jon Lester to a six-year, $155 million contract. Many other trades and acquisitions occurred during the offseason. The Cubs' opening day lineup contained five new players, including center fielder Dexter Fowler. Rookies Kris Bryant and Addison Russell were in the starting lineup by mid-April, and rookie Kyle Schwarber was added in mid-June. On August 30, Jake Arrieta threw a no-hitter against the Los Angeles Dodgers. The Cubs finished the 2015 season in third place in the NL Central, with a record of 97–65, the third-best record in the majors, and earned a wild card berth. On October 7, in the 2015 National League Wild Card Game, Arrieta pitched a complete-game shutout and the Cubs defeated the Pittsburgh Pirates 4–0.
The Cubs defeated the Cardinals in the NLDS three-games-to-one, qualifying for a return to the NLCS for the first time in 12 years, where they faced the New York Mets. This was the first time in franchise history that the Cubs had clinched a playoff series at Wrigley Field. However, they were swept in four games by the Mets and were unable to make it to their first World Series since 1945. After the season, Arrieta won the National League Cy Young Award, becoming the first Cubs pitcher to win the award since Greg Maddux in 1992.
Before the 2016 season, in an effort to shore up their lineup, the Cubs signed free agents Ben Zobrist, Jason Heyward and John Lackey. To make room for the Zobrist signing, Starlin Castro was traded to the Yankees for Adam Warren and Brendan Ryan, the latter of whom was released a week later. In the middle of the season, the Cubs traded top prospect Gleyber Torres to acquire Aroldis Chapman.
In a season that included another no-hitter by Jake Arrieta on April 21 as well as an MVP award for Kris Bryant, the Cubs finished with the best record in Major League Baseball and won their first National League Central title since the 2008 season, winning by 17.5 games. The team also reached the 100-win mark for the first time since 1935 and won 103 total games, the most wins for the franchise since 1910. The Cubs defeated the San Francisco Giants in the National League Division Series and returned to the National League Championship Series for the second year in a row, where they defeated the Los Angeles Dodgers in six games, their first NLCS win since the series was created in 1969. The win earned the Cubs their first World Series appearance since 1945 and a chance at their first World Series title since 1908. Coming back from a three-games-to-one deficit, the Cubs defeated the Cleveland Indians in seven games in the 2016 World Series, becoming the first team to come back from such a deficit since the Kansas City Royals in 1985. On November 4, the city of Chicago held a victory parade and rally for the Cubs that began at Wrigley Field, headed down Lake Shore Drive, and ended in Grant Park. The city estimated that over five million people attended the parade and rally, which made it one of the largest recorded gatherings in history.
In an attempt to become the first team to repeat as World Series champions since the Yankees of 1998–2000, the Cubs struggled for most of the first half of the 2017 season, never moving more than four games over .500 and finishing the first half two games under .500. On July 15, the Cubs fell to a season-high 5.5 games out of first in the NL Central. The Cubs struggled mainly because of their pitching, as Jake Arrieta and Jon Lester faltered and no starting pitcher managed to win more than 14 games (four pitchers had won 15 or more for the Cubs in 2016). The offense also struggled, as Kyle Schwarber batted near .200 for most of the first half and was even sent to the minors. However, the Cubs recovered in the second half to finish 22 games over .500 and win the NL Central by six games over the Milwaukee Brewers. The Cubs pulled out a five-game NLDS series win over the Washington Nationals to advance to the NLCS for the third consecutive year. For the second consecutive year, they faced the Dodgers; this time, the Dodgers defeated the Cubs in five games. In May 2017, the Cubs and the Ricketts family formed Marquee Sports & Entertainment as a central sales and marketing company for the various Ricketts family sports and entertainment assets: the Cubs, Wrigley Rooftops and Hickory Street Capital.
Prior to the 2018 season, the Cubs made several key free agent signings to bolster their pitching staff. The team signed starting pitcher Yu Darvish to a six-year, $126 million contract and veteran closer Brandon Morrow to a two-year, $21 million contract, in addition to Tyler Chatwood and Steve Cishek. However, the Cubs struggled to stay healthy throughout the season. Anthony Rizzo missed much of April due to a back injury, and Bryant missed almost a month due to a shoulder injury. Darvish, who started only eight games in 2018, was lost for the season due to elbow and triceps injuries, and Morrow faced two injuries of his own before the team ruled him out for the season in September. The team maintained first place in their division for much of the season, but the injury-depleted club went only 16–11 in September, which allowed the Milwaukee Brewers to finish with the same record. The Brewers defeated the Cubs in a tie-breaker game to win the Central Division and secure the top seed in the National League. The Cubs subsequently lost to the Colorado Rockies in the 2018 National League Wild Card Game for their earliest playoff exit in three seasons.
The Cubs' roster remained largely intact going into the 2019 season. The team led the Central Division by a half-game over the Brewers at the All-Star break. However, the team's control over the division once again dissipated in the final months of the season. The Cubs lost several key players to injuries during this stretch, including Javier Báez, Anthony Rizzo, and Kris Bryant. The team's postseason chances were compromised by a nine-game losing streak in late September, and the Cubs were eliminated from playoff contention on September 25, marking the first time the team had failed to qualify for the playoffs since 2014. The Cubs announced they would not renew manager Joe Maddon's contract at the end of the season.
On October 24, 2019, the Cubs hired David Ross as their new manager. Ross led the Cubs to a 34–26 record during the 2020 season, which was shortened due to the COVID-19 pandemic. Starting pitcher Yu Darvish rebounded with an 8–3 record and 2.01 ERA, while also finishing as the runner-up for the NL Cy Young Award. The Cubs as a whole also won the first ever "team" Gold Glove Award and finished first in the NL Central, but were swept by the Miami Marlins in the Wild Card round.
Following the 2020 season, Cubs president Theo Epstein resigned from his position on November 17, 2020. He was succeeded by Jed Hoyer, who had served as the team's general manager since 2011; it was announced that Hoyer would also remain general manager until the team could conduct a proper search for a replacement. Prior to the 2021 season, the Cubs announced they would not re-sign Jon Lester, Kyle Schwarber, or Albert Almora, and the team traded Darvish and Victor Caratini to the San Diego Padres in exchange for prospects. After an 11-game losing streak in late June and early July 2021 put the Cubs out of the pennant race, they traded Javier Báez, Kris Bryant, Anthony Rizzo, and other pieces at the trade deadline. These trades allowed journeymen such as Rafael Ortega and Patrick Wisdom to carve out larger roles on the team; Wisdom set a Cubs rookie record with 28 home runs. By the end of the season, the only remaining players from the World Series team were Willson Contreras, Jason Heyward, and Kyle Hendricks.
On October 15, 2021, the Cubs hired Cleveland assistant general manager Carter Hawkins as their new general manager. Following his hiring, the Cubs signed Marcus Stroman to a three-year, $71 million deal and previous World Series foe Yan Gomes to a two-year, $13 million deal. In another rebuilding year, the Cubs finished the 2022 season 74–88, third in the division and 19 games out of first. In the ensuing offseason, Jason Heyward was released and Willson Contreras left in free agency, leaving Kyle Hendricks as the only remaining player from the 2016 championship team. Additionally, fan favorite Rafael Ortega was non-tendered, signaling a new chapter for the Cubs after two straight years of mediocrity.
In an attempt to bolster the team, the Cubs made big moves in free agency, signing All-Star, reigning Gold Glove shortstop Dansby Swanson to a seven-year, $177 million contract as well as former MVP Cody Bellinger to a one-year, $17.5 million deal. In addition, the ballclub added veterans such as Jameson Taillon, Trey Mancini, Mike Tauchman and Tucker Barnhart, and traded for utility man Miles Mastrobuoni. The team also extended key contributors from the previous season, including Ian Happ, Nico Hoerner, and Drew Smyly. Despite these moves, the Cubs entered the 2023 season with low expectations; projection systems such as PECOTA projected them to finish under .500 for the third year in a row. In May 2023, multiple top prospects were called up, namely Miguel Amaya, Matt Mervis, and Christopher Morel, although Mervis was eventually sent back down. After falling as far as 10 games below .500, the Cubs were propelled by an eight-game win streak against the White Sox and Cardinals in late July, prompting the front office to become "buyers" at the August 1 trade deadline. The team acquired former Cub Jeimer Candelario from the Nationals and reliever José Cuas from the Royals, firmly cementing its intent to compete for postseason baseball. The Cubs seemed poised to earn a wild-card berth entering September 2023, but the team lost 15 of its last 22 games and was eliminated from the playoffs after the penultimate game of the season. The Cubs finished the season with an 83–79 record.
On November 6, 2023, the Cubs fired Ross and hired Craig Counsell as their new manager.
The Cubs have played their home games at Wrigley Field, also known as "The Friendly Confines", since 1916. The park was built in 1914 as Weeghman Park for the Chicago Whales, a Federal League baseball team. The Cubs also shared it with the Chicago Bears of the NFL for 50 years. The ballpark features a manual scoreboard, ivy-covered brick walls, and relatively small dimensions.
Located in Chicago's Lake View neighborhood, Wrigley Field sits on an irregular block bounded by Clark and Addison Streets and Waveland and Sheffield Avenues. The area surrounding the ballpark is typically referred to as Wrigleyville. There is a dense collection of sports bars and restaurants in the area, most with baseball-inspired themes, including Sluggers, Murphy's Bleachers and The Cubby Bear. Many of the apartment buildings surrounding Wrigley Field on Waveland and Sheffield Avenues have built bleachers on their rooftops for fans to view games; others sell rooftop space for advertising. One building on Sheffield Avenue has a sign atop its roof reading "Eamus Catuli!", roughly Latin for "Let's Go Cubs!", and another chronicles the years since the last Division title, National League pennant, and World Series championship. On game days, many residents rent out their yards and driveways to people looking for parking spots. The uniqueness of the neighborhood has become ingrained in the culture of the Chicago Cubs, and Wrigley Field has also hosted concerts and other sporting events, such as the 2009 NHL Winter Classic between the Chicago Blackhawks and Detroit Red Wings and a 2010 NCAA football game between the Northwestern Wildcats and Illinois Fighting Illini.
In 2013, Tom Ricketts and team president Crane Kenney unveiled plans for a five-year, $575 million privately funded renovation of Wrigley Field. Called the 1060 Project, the proposed plans included vast improvements to the stadium's facade, infrastructure, restrooms, concourses, suites, press box, bullpens, and clubhouses, as well as a 6,000-square-foot (560 m2) jumbotron to be added in the left field bleachers, batting tunnels, a 3,000-square-foot (280 m2) video board in right field, and, eventually, an adjacent hotel, plaza, and office-retail complex. In previous years, nearly all efforts to conduct large-scale renovations of the field had been opposed by the city, former mayor Richard M. Daley (a staunch White Sox fan), and especially the rooftop owners.
Months of negotiations between the team, a group of rooftop properties investors, local Alderman Tom Tunney, and Chicago Mayor Rahm Emanuel followed with the eventual endorsements of the city's Landmarks Commission, the Plan Commission and final approval by the Chicago City Council in July 2013. The project began at the conclusion of the 2014 season.
The "Bleacher Bums" is a name given to fans, many of whom spend much of the day heckling, who sit in the bleacher section at Wrigley Field. Initially, the group was called "bums" because they attended most of the games, and as Wrigley did not yet have lights, these were all day games, so it was jokingly presumed these fans were jobless. The group was started in 1967 by dedicated fans Ron Grousl, Tom Nall and "mad bugler" Mike Murphy, who was a sports radio host during mid days on Chicago-based WSCR AM 670 "The Score". Murphy has said that Grousl started the Wrigley tradition of throwing back opposing teams' home run balls. A 1977 Broadway play called Bleacher Bums, starring Joe Mantegna, Dennis Farina, Dennis Franz, and James Belushi, was based on a group of Cub fans who frequented the club's games.
Beginning in the days of P.K. Wrigley and the 1937 bleacher/scoreboard reconstruction, and prior to modern media saturation, a flag with either a "W" or an "L" has flown from atop the scoreboard masthead, indicating the day's result(s) when baseball was played at Wrigley. In case of a split doubleheader, both the "W" and "L" flags are flown.
Past Cubs media guides show that originally the flags were blue with a white "W" and white with a blue "L". In 1978, consistent with the dominant colors of the flags, blue and white lights were mounted atop the scoreboard, denoting "win" and "loss" respectively for the benefit of nighttime passers-by.
The flags were replaced by 1990, the first year in which the Cubs media guide reports the switch to the now-familiar colors: white with a blue "W" and blue with a white "L". Besides the need to replace the worn-out flags, by then the retired numbers of Banks and Williams were flying on the foul poles as white flags with blue numbers, so the "good" flag was switched to match that scheme.
This long-established tradition has evolved to fans carrying the white-with-blue-W flags to both home and away games, and displaying them after a Cub win. The flags are known as the Cubs Win Flag. The flags have become more and more popular each season since 1998, and are now even sold as T-shirts with the same layout. In 2009, the tradition spilled over to the NHL as Chicago Blackhawks fans adopted a red and black "W" flag of their own.
During the early and mid-2000s, Chip Caray usually declared that a Cubs win at home meant it was "White flag time at Wrigley!" More recently, the Cubs have promoted the phrase "Fly the W!" among fans and on social media.
The official Cubs team mascot is a young bear cub named Clark, described by the team's press release as young and friendly. Clark made his debut at Advocate Health Care on January 13, 2014, the same day as the press release announcing his installation as the club's first-ever official physical mascot. A bear cub had been used in club imagery since the early 1900s, and it was the inspiration for the Chicago Staleys changing their name to the Chicago Bears: the Cubs allowed the bigger football players (bears to the baseball team's cubs) to play at Wrigley Field.
The Cubs had no official physical mascot prior to Clark, though a man in a polar bear-like outfit called "The Bear-man" (or Beeman), who was mildly popular with fans, paraded the stands briefly in the early 1990s. There is no record of whether he was just a fan in a costume or was employed by the club. Through the 2013 season, there were also unofficial "Cubbie-bear" mascots outside Wrigley on game days, none employed by the team, posing for pictures with fans for tips. The most notable of these was "Billy Cub", played by fan John Paul Weier, who worked outside the stadium for over six years until July 2013, when the club asked him to stop. Weier had unsuccessfully petitioned the team to make Billy Cub the official mascot.
Another unofficial but much better-known mascot is Ronnie "Woo Woo" Wickers, a longtime fan and local celebrity in the Chicago area. He is known to Wrigley Field visitors for his idiosyncratic cheers at baseball games, generally punctuated with an exclamatory "Woo!" (e.g., "Cubs, woo! Cubs, woo! Big-Z, woo! Zambrano, woo! Cubs, woo!"). Longtime Cubs announcer Harry Caray dubbed Wickers "Leather Lungs" for his ability to shout for hours at a time. He is not employed by the team, although the club has on two separate occasions allowed him into the broadcast booth, and once he purchases a ticket or is given one by fans, Wrigley Field security largely allows him to roam the park and interact with fans.
During the summer of 1969, a Chicago studio group produced a single record called "Hey Hey! Holy Mackerel! (The Cubs Song)", whose title and lyrics incorporated the catch-phrases of the Cubs' TV and radio announcers, Jack Brickhouse and Vince Lloyd. Several members of the Cubs recorded an album called Cub Power, which contained a cover of the song. The song received a good deal of local airplay that summer, associating it strongly with that season. It was played much less frequently thereafter, although it remained an unofficial Cubs theme song for some years.
For many years, Cubs radio broadcasts started with "It's a Beautiful Day for a Ball Game" by the Harry Simeone Chorale. In 1979, Roger Bain released a 45 rpm record of his song "Thanks Mr. Banks", to honor "Mr. Cub" Ernie Banks.
The song "Go, Cubs, Go!" by Steve Goodman was recorded early in the 1984 season, and was heard frequently during that season. Goodman died in September of that year, four days before the Cubs clinched the National League Eastern Division title, their first title in 39 years. Since 1984, the song started being played from time to time at Wrigley Field; since 2007, the song has been played over the loudspeakers following each Cubs home victory.
The Mountain Goats recorded a song entitled "Cubs in Five" on the 1995 EP Nine Black Poppies; both its title and its chorus refer to the seeming impossibility of the Cubs winning a World Series.
In 2007, Pearl Jam frontman Eddie Vedder composed a song dedicated to the team called "All the Way". Vedder, a Chicago native and lifelong Cubs fan, composed the song at the request of Ernie Banks. Pearl Jam has played the song live multiple times, several of them at Wrigley Field, and Vedder has performed it twice at his solo shows at the Chicago Auditorium, on August 21 and 22, 2008.
An album entitled Take Me Out to a Cubs Game was released in 2008. It is a collection of 17 songs and other recordings related to the team, including Harry Caray's final performance of "Take Me Out to the Ball Game" on September 21, 1997, the Steve Goodman song mentioned above, and a newly recorded rendition of "Talkin' Baseball" (subtitled "Baseball and the Cubs") by Terry Cashman. The album was produced in celebration of the 100th anniversary of the Cubs' 1908 World Series victory and contains sounds and songs of the Cubs and Wrigley Field.
Season 1, Episode 3 of the American television show Kolchak: The Night Stalker ("They Have Been, They Are, They Will Be...") is set during a fictional 1974 World Series matchup between the Chicago Cubs and the Boston Red Sox.
The 1986 film Ferris Bueller's Day Off features a Cubs game, shown when Ferris' principal goes to a bar looking for him.
The 1989 film Back to the Future Part II depicts the Chicago Cubs defeating a baseball team from Miami in the 2015 World Series, ending the longest championship drought among the four major North American professional sports leagues. In reality, the 2015 Miami Marlins failed to make the playoffs, but the Cubs won the 2015 National League Wild Card Game and advanced to the National League Championship Series by October 21, 2015, the date to which protagonist Marty McFly travels in the film. It was on that same October 21, however, that the Cubs were swept by the New York Mets in the NLCS.
The 1993 film Rookie of the Year, directed by Daniel Stern, centers on the Cubs as a team going nowhere into August, when the club chances upon 12-year-old Cubs fan Henry Rowengartner (Thomas Ian Nicholas), whose right (throwing) arm tendons have healed tightly after a broken arm, granting him the ability to regularly pitch at speeds in excess of 100 miles per hour (160 km/h). Following the Cubs' win over the Cleveland Indians in Game 7 of the 2016 World Series, Nicholas tweeted the final shot from the movie in celebration: Henry holding his fist up to the camera to show a Cubs World Series ring. Stern also reprised his role as Brickma during the Cubs' playoff run.
"Baseball's Sad Lexicon", also known as "Tinker to Evers to Chance" after its refrain, is a 1910 baseball poem by Franklin Pierce Adams. The poem is presented as a single, rueful stanza from the point of view of a New York Giants fan seeing the talented Chicago Cubs infield of shortstop Joe Tinker, second baseman Johnny Evers, and first baseman Frank Chance complete a double play. The trio began playing together with the Cubs in 1902, and formed a double-play combination that lasted through April 1912. The Cubs won the pennant four times between 1906 and 1910, often defeating the Giants en route to the World Series.
The poem was first published in the New York Evening Mail on July 12, 1910. It proved popular among sportswriters, who wrote numerous additional verses. The poem gave Tinker, Evers, and Chance increased popularity and has been credited with helping their elections to the National Baseball Hall of Fame in 1946.
The Cardinals–Cubs rivalry refers to games between the Cubs and the St. Louis Cardinals. The rivalry is also known as the Downstate Illinois rivalry or the I-55 Series (in earlier years, the Route 66 Series), as both cities lie along Interstate 55, which succeeded the famous U.S. Route 66. The Cubs lead the series 1,253–1,196 through October 2021, while the Cardinals lead in National League pennants with 19 against the Cubs' 17. The Cubs have won 11 of those pennants in Major League Baseball's Modern Era (1901–present), while all 19 of the Cardinals' pennants have been won since 1926. The Cardinals also have an edge in World Series success, having won 11 championships to the Cubs' 3. Given the proximity of the two cities, games between the Cardinals and Cubs draw numerous visiting fans to both Busch Stadium in St. Louis and Wrigley Field in Chicago. When the National League split into divisions, the Cardinals and Cubs remained together through both realignments, which has added intensity to several pennant races over the years. The teams have met once in the postseason, in the 2015 National League Division Series, which the Cubs won 3–1.
The Cubs' rivalry with the Milwaukee Brewers, also known as the I-94 rivalry, refers to games between the two clubs, whose ballparks sit roughly 83 miles apart along Interstate 94. It followed a 1969–97 rivalry between the Brewers, then in the American League, and the Chicago White Sox. The proximity of the two cities and the Bears-Packers football rivalry helped make the Cubs-Brewers rivalry one of baseball's best. In the 2018 season, the teams faced off in a Game 163 tiebreaker for the NL Central division title, which Milwaukee won.
The Cubs hold a longtime rivalry with their crosstown foes, the Chicago White Sox; baseball is the only major sports league in which Chicago has retained two franchises since the NFL's Chicago Cardinals relocated in 1960. The rivalry goes by multiple names, such as the Wintrust Crosstown Cup, Crosstown Classic, Windy City Showdown, Red Line Series, City Series, Crosstown Series, Crosstown Cup, or Crosstown Showdown, all referring to the two Major League Baseball teams fighting for dominance across Chicago. The terms "North Siders" and "South Siders" are synonymous with the respective teams and their fans, as Wrigley Field is located on the city's North Side while Guaranteed Rate Field is on the South Side, setting up an enduring cross-town rivalry.
Notably, this rivalry predates the interleague play era; the teams' only postseason meeting was the 1906 World Series, the first World Series between teams from the same city. The White Sox won that series four games to two over the heavily favored Cubs, who had won a record 116 games during the regular season. The rivalry continued through years of exhibition games, culminating in the Crosstown Classic from 1985 to 1995, in which the White Sox went undefeated at 10–0–2. The White Sox currently lead the regular-season series 72–64.
The Cubs currently wear pinstriped white uniforms at home. The design dates back to 1957, when the Cubs debuted the first version of the uniform. The basic look has the Cubs logo on the left chest, along with blue pinstripes and blue numbers. A left sleeve patch featuring the cub-head logo was added in 1962; it was tweaked to include a red circle and an angrier expression in 1979, returned to a cuter version in 1994, and was changed to the current "walking cub" logo in 1997. During this period the uniform received a few alterations, going from zippers to pullovers with sleeve stripes to the current buttoned look. The primary Cubs logo also received thicker letters and a thicker circle, while the blue numbers gained red trim and player names were added.
The Cubs' road gray uniform has been in use since 1997. This design has "Chicago" in blue letters with white trim arranged in a radial arch, along with red chest numbers with white trim. The back of the uniform has player names in blue with white trim, and numbers in red with white trim. This set also features the "walking cub" patch on the left sleeve.
The Cubs also wear a blue alternate uniform. The current design, first introduced in 1997, has the "walking cub" logo on the left chest, along with red letters and numbers with white trim. Prior to 2023, the National League logo appeared on the right sleeve; it has since been removed in anticipation of a future advertisement patch. The blue alternates are usually worn on the road, though in the past they were also worn at home, and at one point a home blue version without player names was used as well.
All three designs are paired with an all-blue cap with the red "C" trimmed in white, which was first worn in 1959.
Beginning in 2021, Major League Baseball and Nike introduced the "City Connect" series, featuring uniquely designed uniforms inspired by each city's community and personality. The Cubs' design is navy blue with light blue accents on both the jersey and pants, and features a "Wrigleyville" wordmark inspired by the Wrigley Field marquee. Caps are navy blue with a light blue brim and feature the trademark "C" crest in white with light blue trim, along with the red six-point star inside. The left sleeve patch carries the full team name inside a navy circle, along with a specially designed municipal device incorporating the Chicago city flag.
Prior to unveiling their current look, the Cubs went through a variety of uniform looks in their early years, incorporating either a "standing cub" logo, a primitive version of the "C-UBS" logo, a "wishbone C" mark (later adopted by the Chicago Bears of the NFL), or the team or city name in various fonts. The uniform itself went from having pinstripes to racing stripes and chest piping. Navy blue and sometimes red served as the team colors through the mid-1940s when the team switched to the more familiar royal blue and red color scheme.
After unveiling the first version of what later became their current home uniform in 1957, the Cubs went through various changes with the road uniform. It had the full team name in red letters for its first season before switching to a more basic city name in blue letters with red trim. A cub-head logo was added to the sleeves in 1962, with several alterations coming afterward. By 1969, the red trim was removed and chest numbers were added. When the Cubs switched to pullovers in 1972, the road uniform remained gray, but the chest numbers moved to the middle before returning to the left side the following year. The uniform then changed to a powder blue base in 1976, with pinstripes added in 1978 and player names the following year. From 1982 to 1989, the Cubs wore blue tops with plain white pants for road games, featuring the primary Cubs logo in front and red letters with white trim.
In 1990, the Cubs returned to wearing buttoned gray uniforms on the road. The set went through some cosmetic changes, from a straight "Chicago" wordmark with red chest numbers (later with a taller font and red back numbers) to a script "Cubs" wordmark written diagonally. A blue alternate uniform returned in 1994, also incorporating the script "Cubs" wordmark in red, minus the chest numbers. This was changed to the "walking cub" logo in 1997, which was also incorporated as a sleeve patch on the road uniform. From 1994 to 2008, the Cubs also wore an alternate road blue cap with a red brim. In 2014, the Cubs introduced a second gray road uniform, this time with block "Cubs" lettering, blue piping, and red block numbers, but it lasted only two seasons.
[Attendance table: due to the COVID-19 pandemic, no fans were allowed at Wrigley Field during the 2020 season, and 2021 attendance was capped at 20% of capacity until June 11.]
Throughout the history of the Chicago Cubs' franchise, 15 different Cubs pitchers have pitched no-hitters; however, no Cubs pitcher has thrown a perfect game.
As of 2020, the Chicago Cubs are ranked as the 17th most valuable sports team in the world, 14th in the United States, fourth in MLB, and tied for second in the city of Chicago with the Bulls.
The Cubs' retired numbers are commemorated on pinstriped flags flying from the foul poles at Wrigley Field; the lone exception is Jackie Robinson, the Brooklyn Dodgers player whose number 42 was retired for all clubs. The first retired-number flag, Ernie Banks' number 14, was raised on the left-field pole, and the poles have alternated since then: 14, 10, and 31 (Jenkins) fly on the left-field pole, while 26, 23, and 31 (Maddux) fly on the right-field pole.
In August 2021, the Cubs reintroduced a Hall of Fame exhibit. The team had first established a Cubs Hall of Fame in 1982, inducting 41 members over the next four years. Six years later, the effort resumed as the Cubs Walk of Fame, which enshrined nine more before being paused in 1998. Every member of those earlier exhibits was inducted into the new Hall of Fame, alongside the five most recent Cubs to enter the National Baseball Hall of Fame (Sutter, Dawson, Santo, Maddux, Smith). The 2021 class added one new member, Margaret Donahue (team corporate/executive secretary and vice president), bringing the inaugural membership to 56 names.
Induction carries two stipulations: at least five years spent as a Cub and significant contributions made as a member of the Cubs. The exhibit is located in the Budweiser Bleacher concourse in left field of Wrigley Field.
The Chicago Cubs farm system consists of seven minor league affiliates.
The Chicago White Stockings (today's Chicago Cubs) began spring training in Hot Springs, Arkansas, in 1886. President Albert Spalding (founder of Spalding Sporting Goods) and player/manager Cap Anson brought their players to Hot Springs and played at the Hot Springs Baseball Grounds. The concept was for the players to train and condition before the start of the regular season, making use of the Hot Springs bath houses after practices. After the White Stockings' successful 1886 season, in which they won the National League pennant, other teams began bringing their players to Hot Springs for "spring training". The Chicago Cubs, St. Louis Browns, New York Yankees, St. Louis Cardinals, Cleveland Spiders, Detroit Tigers, Pittsburgh Pirates, Cincinnati Reds, New York Highlanders, Brooklyn Dodgers, and Boston Red Sox were among the early squads to arrive. Whittington Park (1894) and later Majestic Park (1909) and Fogel Field (1912) were built in Hot Springs specifically to host Major League teams.
The Cubs' current spring training facility is Sloan Park in Mesa, Arizona, where they play in the Cactus League. The park seats 15,000, making it Major League Baseball's largest spring training facility by capacity. The Cubs annually sell out most of their games, both at home and on the road. Before Sloan Park opened in 2014, the team had played at HoHoKam Park – Dwight Patterson Field since 1979. "HoHoKam" is a Native American word often translated as "those who vanished". The North Siders have called Mesa their spring home for most seasons since 1952.
In addition to Mesa, the club has held spring training in Hot Springs, Arkansas (1886, 1896–1900, 1909–1910); New Orleans (1870, 1907, 1911–1912); Champaign, Illinois (1901–02, 1906); Los Angeles (1903–04, 1948–1949); Santa Monica, California (1905); French Lick, Indiana (1908, 1943–1945); Tampa, Florida (1913–1916); Pasadena, California (1917–1921); Santa Catalina Island, California (1922–1942, 1946–1947, 1950–1951); Rendezvous Park in Mesa (1952–1965); Blair Field in Long Beach, California (1966); and Scottsdale, Arizona (1967–1978).
The curious location on Catalina Island stemmed from Cubs owner William Wrigley Jr.'s then-majority interest in the island, acquired in 1919. Wrigley constructed a ballpark on the island to house the Cubs in spring training; called Wrigley Field of Avalon, it was built to the same dimensions as Wrigley Field. (The ballpark is long gone, but a clubhouse built by Wrigley to house the Cubs survives as the Catalina Country Club.) After some bad weather in 1951, the team left Catalina Island and shifted spring training to Mesa, Arizona, a city where the Wrigleys also had interests. The Cubs' 30-year association with Catalina is chronicled in the book The Cubs on Catalina by Jim Vitti, which was named International 'Book of the Year' by The Sporting News. Today, an exhibit at the Catalina Museum is dedicated to the Cubs' spring training on the island.
The former location in Mesa is actually the second Hohokam Park (Hohokam Stadium, 1997–2013); the first was built in 1976 as the spring-training home of the Oakland Athletics, who left the park in 1979. Apart from HoHoKam Park and Sloan Park, the Cubs have another Mesa training facility called Fitch Park. This complex provides 25,000 square feet (2,300 m²) of team facilities, including a major league clubhouse, four practice fields, one practice infield, enclosed batting tunnels, batting cages, a maintenance facility, and administrative offices for the Cubs.
Cubs radio rights are held by Entercom, which acquired them (as CBS Radio) effective 2015, ending the team's 90-year association with 720 WGN. During the first season of the contract, Cubs games aired on WBBM, which took over as flagship of the Chicago Cubs Radio Network. On November 11, 2015, CBS announced that the Cubs would move to WBBM's all-sports sister station, WSCR, beginning in the 2016 season. The move was made possible by the end of WSCR's rights agreement with the White Sox, who moved to WLS.
The play-by-play voice of the Cubs is Pat Hughes, who has held the position since 1996, joined by Ron Coomer. Former Cubs third baseman and fan favorite Ron Santo had been Hughes' long-time partner until his death in 2010. Keith Moreland replaced Hall of Fame inductee Santo for three seasons, followed by Coomer for the 2014 season.
The club publishes a traditional media guide. It formerly produced an official magazine, Vineline, which published 12 issues a year and ran for 33 years, spotlighting players and events involving the club. The magazine was discontinued in 2018.
Since the 2020 season, all Cubs games not aired on broadcast television have aired on Marquee Sports Network, a joint venture between the team and Sinclair Broadcast Group. The venture was officially announced in February 2019.
WGN-TV had a long-term association with the team, airing Cubs games via its WGN Sports department from the station's establishment in 1948 through the 2019 season. For a period, WGN's Cubs games aired nationally on WGN America (formerly Superstation WGN); however, prior to the 2015 season, Cubs games, along with all other Chicago sports programming, were dropped from the channel as part of its repositioning as a general entertainment cable channel. To compensate, all games carried by over-the-air channels were syndicated to a network of other television stations within the Cubs' market, which includes Illinois and parts of Indiana and Iowa. Due to limits on program pre-emptions imposed by WGN's former affiliations with The WB and its successor The CW, WGN occasionally sub-licensed some of its sports broadcasts to another station in the market, particularly independent station WCIU-TV (and later MyNetworkTV station WPWR-TV).
In November 2013, the Cubs exercised an option to terminate their existing broadcast rights deal with WGN-TV after the 2014 season, requesting a higher-valued contract lasting through the 2019 season (aligned with the end of the team's contract with CSN Chicago). The team would split its over-the-air package with a second partner, ABC owned-and-operated station WLS-TV, which acquired rights to 25 games per season from 2015 through 2019. On January 7, 2015, WGN announced that it would air 45 games per season through 2019.
Beginning in 1999, regional sports network FSN Chicago served as the cable rightsholder for games not on WGN or MLB's national television outlets. In 2003, the owners of the Cubs, White Sox, Blackhawks, and Bulls all broke away from FSN Chicago and partnered with Comcast to form Comcast SportsNet Chicago (CSN Chicago, now NBC Sports Chicago) in 2004, assuming cable rights to all four teams.
As of the 2021 season, Jon Sciambi serves as the Cubs' lead television play-by-play announcer; when Sciambi is on national TV or radio assignment with ESPN, his role is filled by Chris Myers, Beth Mowins, or Pat Hughes. Sciambi is joined by Jim Deshaies, Ryan Dempster, Joe Girardi, and/or Rick Sutcliffe.
Len Kasper (play-by-play, 2005–2020), Bob Brenly (analyst, 2005–2012), Chip Caray (play-by-play, 1998–2004), Steve Stone (analyst, 1983–2000, 2003–04), Joe Carter (analyst for WGN-TV games, 2001–02) and Dave Otto (analyst for FSN Chicago games, 2001–02) also have spent time broadcasting from the Cubs booth since the death of Harry Caray in 1998.
{
"paragraph_id": 0,
"text": "The Chicago Cubs are an American professional baseball team based in Chicago. The Cubs compete in Major League Baseball (MLB) as part of the National League (NL) Central division. The club plays its home games at Wrigley Field, which is located on Chicago's North Side. The Cubs are one of two major league teams based in Chicago; the other, the Chicago White Sox, are a member of the American League (AL) Central division. The Cubs, first known as the White Stockings, were a founding member of the NL in 1876, becoming the Chicago Cubs in 1903.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Throughout the club's history, the Cubs have played in a total of 11 World Series. The 1906 Cubs won 116 games, finishing 116–36 and posting a modern-era record winning percentage of .763, before losing the World Series to the Chicago White Sox (\"The Hitless Wonders\") by four games to two. The Cubs won back-to-back World Series championships in 1907 and 1908, becoming the first major league team to play in three consecutive World Series, and the first to win it twice. Most recently, the Cubs won the 2016 National League Championship Series and 2016 World Series, which ended a 71-year National League pennant drought and a 108-year World Series championship drought, both of which are record droughts in Major League Baseball. The 108-year drought was also the longest such occurrence in all major sports leagues in the United States and Canada. Since the start of divisional play in 1969, the Cubs have appeared in the postseason 11 times through the 2022 season.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Cubs are known as \"the North Siders\", a reference to the location of Wrigley Field within the city of Chicago, and in contrast to the White Sox, whose home field (Guaranteed Rate Field) is located on the South Side.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Through 2023, the franchise's all-time record is 11,244–10,688(.513).",
"title": ""
},
{
"paragraph_id": 4,
"text": "The Cubs began in 1870 as the Chicago White Stockings, playing their home games at West Side Grounds.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Six years later, they joined the National League (NL) as a charter member. In the runup to their NL debut, owner William Hulbert signed various star players, such as pitcher Albert Spalding and infielders Ross Barnes, Deacon White, and Adrian \"Cap\" Anson. The White Stockings quickly established themselves as one of the new league's top teams. Spalding won forty-seven games and Barnes led the league in hitting at .429 as Chicago won the first National League pennant, which at the time was the game's top prize.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "After back-to-back pennants in 1880 and 1881, Hulbert died, and Spalding, who had retired from playing to start Spalding sporting goods, assumed ownership of the club. The White Stockings, with Anson acting as player-manager, captured their third consecutive pennant in 1882, and Anson established himself as the game's first true superstar. In 1885 and 1886, after winning NL pennants, the White Stockings met the champions of the short-lived American Association in that era's version of a World Series. Both seasons resulted in matchups with the St. Louis Brown Stockings; the clubs tied in 1885 and St. Louis won in 1886. This was the genesis of what would eventually become one of the greatest rivalries in sports. In all, the Anson-led Chicago Base Ball Club won six National League pennants between 1876 and 1886. By 1890, the team had become known the Chicago Colts, or sometimes \"Anson's Colts\", referring to Cap's influence within the club. Anson was the first player in history credited with 3,000 career hits. In 1897, after a disappointing record of 59–73 and a ninth-place finish, Anson was released by the club as both a player and manager. His departure after 22 years led local newspaper reporters to refer to the Colts as the \"Orphans\".",
"title": "History"
},
{
"paragraph_id": 7,
"text": "After the 1900 season, the American Base-Ball League formed as a rival professional league. The club's old White Stockings nickname (eventually shortened to White Sox) was adopted by a new American League neighbor to the south.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "In 1902, Spalding, who by this time had revamped the roster to boast what would soon be one of the best teams of the early century, sold the club to Jim Hart. The franchise was nicknamed the Cubs by the Chicago Daily News in 1902; it officially took the name five years later. During this period, which has become known as baseball's dead-ball era, Cub infielders Joe Tinker, Johnny Evers, and Frank Chance were made famous as a double-play combination by Franklin P. Adams' poem \"Baseball's Sad Lexicon\". The poem first appeared in the July 18, 1910 edition of the New York Evening Mail. Mordecai \"Three-Finger\" Brown, Jack Taylor, Ed Reulbach, Jack Pfiester, and Orval Overall were several key pitchers for the Cubs during this time period. With Chance acting as player-manager from 1905 to 1912, the Cubs won four pennants and two World Series titles over a five-year span. Although they fell to the \"Hitless Wonders\" White Sox in the 1906 World Series, the Cubs recorded a record 116 victories and the best winning percentage (.763) in Major League history. With mostly the same roster, Chicago won back-to-back World Series championships in 1907 and 1908, becoming the first Major League club to play three times in the Fall Classic and the first to win it twice. However, the Cubs would not win another World Series until 2016; this remains the longest championship drought in North American professional sports.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The next season, veteran catcher Johnny Kling left the team to become a professional pocket billiards player. Some historians think Kling's absence was significant enough to prevent the Cubs from also winning a third straight title in 1909, as they finished 6 games out of first place. When Kling returned the next year, the Cubs won the pennant again, but lost to the Philadelphia Athletics in the 1910 World Series.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1914, advertising executive Albert Lasker obtained a large block of the club's shares and before the 1916 season assumed majority ownership of the franchise. Lasker brought in a wealthy partner, Charles Weeghman, the proprietor of a popular chain of lunch counters who had previously owned the Chicago Whales of the short-lived Federal League. As principal owners, the pair moved the club from the West Side Grounds to the much newer Weeghman Park, which had been constructed for the Whales only two years earlier, where they remain to this day. The Cubs responded by winning a pennant in the war-shortened season of 1918, where they played a part in another team's curse: the Boston Red Sox defeated Grover Cleveland Alexander's Cubs four games to two in the 1918 World Series, Boston's last Series championship until 2004.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Beginning in 1916, Bill Wrigley of chewing-gum fame acquired an increasing quantity of stock in the Cubs. By 1921 he was the majority owner, maintaining that status into the 1930s.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Meanwhile, the year 1919 saw the start of the tenure of Bill Veeck, Sr. as team president. Veeck would hold that post throughout the 1920s and into the 30s. The management team of Wrigley and Veeck came to be known as the \"double-Bills\".",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Near the end of the first decade of the double-Bills' guidance, the Cubs won the NL Pennant in 1929 and then achieved the unusual feat of winning a pennant every three years, following up the 1929 flag with league titles in 1932, 1935, and 1938. Unfortunately, their success did not extend to the Fall Classic, as they fell to their AL rivals each time. The '32 series against the Yankees featured Babe Ruth's \"called shot\" at Wrigley Field in game three. There were some historic moments for the Cubs as well; In 1930, Hack Wilson, one of the top home run hitters in the game, had one of the most impressive seasons in MLB history, hitting 56 home runs and establishing the current runs-batted-in record of 191. That 1930 club, which boasted six eventual hall of fame members (Wilson, Gabby Hartnett, Rogers Hornsby, George \"High Pockets\" Kelly, Kiki Cuyler and manager Joe McCarthy) established the current team batting average record of .309. In 1935 the Cubs claimed the pennant in thrilling fashion, winning a record 21 games in a row in September. The '38 club saw Dizzy Dean lead the team's pitching staff and provided a historic moment when they won a crucial late-season game at Wrigley Field over the Pittsburgh Pirates with a walk-off home run by Gabby Hartnett, which became known in baseball lore as \"The Homer in the Gloamin'\".",
"title": "History"
},
{
"paragraph_id": 14,
"text": "After the \"Double-Bills\" (Wrigley and Veeck) died in 1932 and 1933 respectively, P.K. Wrigley, son of Bill Wrigley, took over as majority owner. He was unable to extend his father's baseball success beyond 1938, and the Cubs slipped into years of mediocrity, although the Wrigley family would retain control of the team until 1981.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "The Cubs enjoyed one more pennant at the close of World War II, finishing 98–56. Due to the wartime travel restrictions, the first three games of the 1945 World Series were played in Detroit, where the Cubs won two games, including a one-hitter by Claude Passeau, and the final four were played at Wrigley. The Cubs lost the series, and did not return until the 2016 World Series. After losing the 1945 World Series to the Detroit Tigers, the Cubs finished with a respectable 82–71 record in the following year, but this was only good enough for third place.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In the following two decades, the Cubs played mostly forgettable baseball, finishing among the worst teams in the National League on an almost annual basis. From 1947 to 1966, they only notched one winning season. Longtime infielder-manager Phil Cavarretta, who had been a key player during the 1945 season, was fired during spring training in 1954 after admitting the team was unlikely to finish above fifth place. Although shortstop Ernie Banks would become one of the star players in the league during the next decade, finding help for him proved a difficult task, as quality players such as Hank Sauer were few and far between. This, combined with poor ownership decisions such as the College of Coaches, and the ill-fated trade of future Hall of Fame member Lou Brock to the Cardinals for pitcher Ernie Broglio (who won only seven games over the next three seasons), hampered on-field performance.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "The late-1960s brought hope of a renaissance, with third baseman Ron Santo, pitcher Ferguson Jenkins, and outfielder Billy Williams joining Banks. After losing a dismal 103 games in 1966, the Cubs brought home consecutive winning records in '67 and '68, marking the first time a Cub team had accomplished that feat in over two decades.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1969 the Cubs, managed by Leo Durocher, built a substantial lead in the newly created National League Eastern Division by mid-August. Ken Holtzman pitched a no-hitter on August 19, and the division lead grew to 8 1⁄2 games over the St. Louis Cardinals and by 9 1⁄2 games over the New York Mets. After the game of September 2, the Cubs record was 84–52 with the Mets in second place at 77–55. But then a losing streak began just as a Mets winning streak was beginning. The Cubs lost the final game of a series at Cincinnati, then came home to play the resurgent Pittsburgh Pirates (who would finish in third place). After losing the first two games by scores of 9–2 and 13–4, the Cubs led going into the ninth inning. A win would be a positive springboard since the Cubs were to play a crucial series with the Mets the next day. But Willie Stargell drilled a two-out, two-strike pitch from the Cubs' ace reliever, Phil Regan, onto Sheffield Avenue to tie the score in the top of the ninth. The Cubs would lose 7–5 in extra innings.[6] Burdened by a four-game losing streak, the Cubs traveled to Shea Stadium for a short two-game set. The Mets won both games, and the Cubs left New York with a record of 84–58 just 1⁄2 game in front. More of the same followed in Philadelphia, as a 99 loss Phillies team nonetheless defeated the Cubs twice, to extend Chicago's losing streak to eight games. In a key play in the second game, on September 11, Cubs starter Dick Selma threw a surprise pickoff attempt to third baseman Ron Santo, who was nowhere near the bag or the ball. Selma's throwing error opened the gates to a Phillies rally. After that second Philly loss, the Cubs were 84–60 and the Mets had pulled ahead at 85–57. The Mets would not look back. The Cubs' eight-game losing streak finally ended the next day in St. Louis, but the Mets were in the midst of a ten-game winning streak, and the Cubs, wilting from team fatigue, generally deteriorated in all phases of the game.[1] The Mets (who had lost a record 120 games 7 years earlier), would go on to win the World Series. The Cubs, despite a respectable 92–70 record, would be remembered for having lost a remarkable 17½ games in the standings to the Mets in the last quarter of the season.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Following the 1969 season, the club posted winning records for the next few seasons, but no playoff action. After the core players of those teams started to move on, the 70s got worse for the team, and they became known as \"the Loveable Losers\". In 1977, the team found some life, but ultimately experienced one of its biggest collapses. The Cubs hit a high-water mark on June 28 at 47–22, boasting an 8+1⁄2 game NL East lead, as they were led by Bobby Murcer (27 HR/89 RBI), and Rick Reuschel (20–10). However, the Philadelphia Phillies cut the lead to two by the All-star break, as the Cubs sat 19 games over .500, but they swooned late in the season, going 20–40 after July 31. The Cubs finished in fourth place at 81–81, while Philadelphia surged, finishing with 101 wins. The following two seasons also saw the Cubs get off to a fast start, as the team rallied to over 10 games above .500 well into both seasons, only to again wear down and play poorly later on, and ultimately settling back to mediocrity. This trait became known as the \"June Swoon\". Again, the Cubs' unusually high number of day games is often pointed to as one reason for the team's inconsistent late-season play.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Wrigley died in 1977. The Wrigley family sold the team to the Chicago Tribune in 1981, ending a 65-year family relationship with the Cubs.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "After over a dozen more subpar seasons, in 1981 the Cubs hired GM Dallas Green from Philadelphia to turn around the franchise. Green had managed the 1980 Phillies to the World Series title. One of his early GM moves brought in a young Phillies minor-league 3rd baseman named Ryne Sandberg, along with Larry Bowa for Iván DeJesús. The 1983 Cubs had finished 71–91 under Lee Elia, who was fired before the season ended by Green. Green continued the culture of change and overhauled the Cubs roster, front-office and coaching staff prior to 1984. Jim Frey was hired to manage the 1984 Cubs, with Don Zimmer coaching 3rd base and Billy Connors serving as pitching coach.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "Green shored up the 1984 roster with a series of transactions. In December 1983 Scott Sanderson was acquired from Montreal in a three-team deal with San Diego for Carmelo Martínez. Pinch hitter Richie Hebner (.333 BA in 1984) was signed as a free-agent. In spring training, moves continued: LF Gary Matthews and CF Bobby Dernier came from Philadelphia on March 26, for Bill Campbell and a minor leaguer. Reliever Tim Stoddard (10–6 3.82, 7 saves) was acquired the same day for a minor leaguer; veteran pitcher Ferguson Jenkins was released.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The team's commitment to contend was complete when Green made a midseason deal on June 15 to shore up the starting rotation due to injuries to Rick Reuschel (5–5) and Sanderson. The deal brought 1979 NL Rookie of the Year pitcher Rick Sutcliffe from the Cleveland Indians. Joe Carter (who was with the Triple-A Iowa Cubs at the time) and right fielder Mel Hall were sent to Cleveland for Sutcliffe and back-up catcher Ron Hassey (.333 with Cubs in 1984). Sutcliffe (5–5 with the Indians) immediately joined Sanderson (8–5 3.14), Eckersley (10–8 3.03), Steve Trout (13–7 3.41) and Dick Ruthven (6–10 5.04) in the starting rotation. Sutcliffe proceeded to go 16–1 for Cubs and capture the Cy Young Award.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The Cubs 1984 starting lineup was very strong. It consisted of LF Matthews (.291 14–82 101 runs 17 SB), C Jody Davis (.256 19–94), RF Keith Moreland (.279 16–80), SS Larry Bowa (.223 10 SB), 1B Leon \"Bull\" Durham (.279 23–96 16SB), CF Dernier (.278 45 SB), 3B Ron Cey (.240 25–97), Closer Lee Smith (9–7 3.65 33 saves) and 1984 NL MVP Ryne Sandberg (.314 19–84 114 runs, 19 triples, 32 SB).",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Reserve players Hebner, Thad Bosley, Henry Cotto, Hassey and Dave Owen produced exciting moments. The bullpen depth of Rich Bordi, George Frazier, Warren Brusstar and Dickie Noles did their job in getting the game to Smith or Stoddard.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "At the top of the order, Dernier and Sandberg were exciting, aptly coined \"the Daily Double\" by Harry Caray. With strong defense – Dernier CF and Sandberg 2B, won the NL Gold Glove- solid pitching and clutch hitting, the Cubs were a well-balanced team. Following the \"Daily Double\", Matthews, Durham, Cey, Moreland and Davis gave the Cubs an order with no gaps to pitch around. Sutcliffe anchored a strong top-to-bottom rotation, and Smith was one of the top closers in the game.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The shift in the Cubs' fortunes was characterized June 23 on the \"NBC Saturday Game of the Week\" contest against the St. Louis Cardinals; it has since been dubbed simply \"The Sandberg Game\". With the nation watching and Wrigley Field packed, Sandberg emerged as a superstar with not one, but two game-tying home runs against Cardinals closer Bruce Sutter. With his shots in the 9th and 10th innings, Wrigley Field erupted and Sandberg set the stage for a comeback win that cemented the Cubs as the team to beat in the East. No one would catch them.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In early August the Cubs swept the Mets in a 4-game home series that further distanced them from the pack. An infamous Keith Moreland-Ed Lynch fight erupted after Lynch hit Moreland with a pitch, perhaps forgetting Moreland was once a linebacker at the University of Texas. It was the second game of a doubleheader and the Cubs had won the first game in part due to a three-run home run by Moreland. After the bench-clearing fight, the Cubs won the second game, and the sweep put the Cubs at 68–45.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In 1984, each league had two divisions, East and West. The divisional winners met in a best-of-5 series to advance to the World Series, in a \"2–3\" format, first two games were played at the home of the team who did not have home-field advantage. Then the last three games were played at the home of the team, with home-field advantage. Thus the first two games were played at Wrigley Field and the next three at the home of their opponents, San Diego. A common and unfounded myth is that since Wrigley Field did not have lights at that time the National League decided to give the home field advantage to the winner of the NL West. In fact, home-field advantage had rotated between the winners of the East and West since 1969 when the league expanded. In even-numbered years, the NL West had home-field advantage. In odd-numbered years, the NL East had home-field advantage. Since the NL East winners had had home-field advantage in 1983, the NL West winners were entitled to it.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "The confusion may stem from the fact that Major League Baseball did decide that, should the Cubs make it to the World Series, the American League winner would have home-field advantage. At the time home field advantage was rotated between each league. Odd-numbered years the AL had home-field advantage. Even-numbered years the NL had home-field advantage. In the 1982 World Series the St. Louis Cardinals of the NL had home-field advantage. In the 1983 World Series the Baltimore Orioles of the AL had home-field advantage.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In the NLCS, the Cubs easily won the first two games at Wrigley Field against the San Diego Padres. The Padres were the winners of the Western Division with Steve Garvey, Tony Gwynn, Eric Show, Goose Gossage and Alan Wiggins. With wins of 13–0 and 4–2, the Cubs needed to win only one game of the next three in San Diego to make it to the World Series. After being beaten in Game 3 7–1, the Cubs lost Game 4 when Smith, with the game tied 5–5, allowed a game-winning home run to Garvey in the bottom of the ninth inning. In Game 5 the Cubs took a 3–0 lead into the 6th inning, and a 3–2 lead into the seventh with Sutcliffe (who won the Cy Young Award that year) still on the mound. Then, Leon Durham had a sharp grounder go under his glove. This critical error helped the Padres win the game 6–3, with a 4-run 7th inning and keep Chicago out of the 1984 World Series against the Detroit Tigers. The loss ended a spectacular season for the Cubs, one that brought alive a slumbering franchise and made the Cubs relevant for a whole new generation of Cubs fans.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The Padres would be defeated in 5 games by Sparky Anderson's Tigers in the World Series.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The 1985 season brought high hopes. The club started out well, going 35–19 through mid-June, but injuries to Sutcliffe and others in the pitching staff contributed to a 13-game losing streak that pushed the Cubs out of contention.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1989, the first full season with night baseball at Wrigley Field, Don Zimmer's Cubs were led by a core group of veterans in Ryne Sandberg, Rick Sutcliffe and Andre Dawson, who were boosted by a crop of youngsters such as Mark Grace, Shawon Dunston, Greg Maddux, Rookie of the Year Jerome Walton, and Rookie of the Year Runner-Up Dwight Smith. The Cubs won the NL East once again that season winning 93 games. This time the Cubs met the San Francisco Giants in the NLCS. After splitting the first two games at home, the Cubs headed to the Bay Area, where despite holding a lead at some point in each of the next three games, bullpen meltdowns and managerial blunders ultimately led to three straight losses. The Cubs could not overcome the efforts of Will Clark, whose home run off Maddux, just after a managerial visit to the mound, led Maddux to think Clark knew what pitch was coming. Afterward, Maddux would speak into his glove during any mound conversation, beginning what is a norm today. Mark Grace was 11–17 in the series with 8 RBI. Eventually, the Giants lost to the \"Bash Brothers\" and the Oakland A's in the famous \"Earthquake Series\".",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The 1998 season began on a somber note with the death of broadcaster Harry Caray. After the retirement of Sandberg and the trade of Dunston, the Cubs had holes to fill, and the signing of Henry Rodríguez to bat cleanup provided protection for Sammy Sosa in the lineup, as Rodriguez slugged 31 round-trippers in his first season in Chicago. Kevin Tapani led the club with a career-high 19 wins while Rod Beck anchored a strong bullpen and Mark Grace turned in one of his best seasons. The Cubs were swamped by media attention in 1998, and the team's two biggest headliners were Sosa and rookie flamethrower Kerry Wood. Wood's signature performance was one-hitting the Houston Astros, a game in which he tied the major league record of 20 strikeouts in nine innings. His torrid strikeout numbers earned Wood the nickname \"Kid K\", and ultimately earned him the 1998 NL Rookie of the Year award. Sosa caught fire in June, hitting a major league record 20 home runs in the month, and his home run race with Cardinal's slugger Mark McGwire transformed the pair into international superstars in a matter of weeks. McGwire finished the season with a new major league record of 70 home runs, but Sosa's .308 average and 66 homers earned him the National League MVP Award. After a down-to-the-wire Wild Card chase with the San Francisco Giants, Chicago and San Francisco ended the regular season tied, and thus squared off in a one-game playoff at Wrigley Field. Third baseman Gary Gaetti hit the eventual game-winning homer in the playoff game. The win propelled the Cubs into the postseason for the first time since 1989 with a 90–73 regular-season record. Unfortunately, the bats went cold in October, as manager Jim Riggleman's club batted .183 and scored only four runs en route to being swept by Atlanta in the National League Division Series. The home run chase between Sosa, McGwire and Ken Griffey Jr. helped professional baseball to bring in a new crop of fans as well as bringing back some fans who had been disillusioned by the 1994 strike. The Cubs retained many players who experienced career years in 1998, but, after a fast start in 1999, they collapsed again (starting with being swept at the hands of the cross-town White Sox in mid-June) and finished in the bottom of the division for the next two seasons.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Despite losing fan favorite Grace to free agency and the lack of production from newcomer Todd Hundley, skipper Don Baylor's Cubs put together a good season in 2001. The season started with Mack Newton being brought in to preach \"positive thinking\". One of the biggest stories of the season transpired as the club made a midseason deal for Fred McGriff, which was drawn out for nearly a month as McGriff debated waiving his no-trade clause. The Cubs led the wild card race by 2.5 games in early September, but crumbled when Preston Wilson hit a three-run walk-off homer off of closer Tom \"Flash\" Gordon, which halted the team's momentum. The team was unable to make another serious charge, and finished at 88–74, five games behind both Houston and St. Louis, who tied for first. Sosa had perhaps his finest season and Jon Lieber led the staff with a 20-win season.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The Cubs had high expectations in 2002, but the squad played poorly. On July 5, 2002, the Cubs promoted assistant general manager and player personnel director Jim Hendry to the General Manager position. The club responded by hiring Dusty Baker and by making some major moves in 2003. Most notably, they traded with the Pittsburgh Pirates for outfielder Kenny Lofton and third baseman Aramis Ramírez, and rode dominant pitching, led by Kerry Wood and Mark Prior, as the Cubs led the division down the stretch.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "Chicago halted St. Louis' run to the playoffs by taking four of five games from the Cardinals at Wrigley Field in early September, after which they won their first division title in 14 years. They then went on to defeat the Atlanta Braves in a dramatic five-game Division Series, the franchise's first postseason series win since beating the Detroit Tigers in the 1908 World Series.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "After losing an extra-inning game in Game 1, the Cubs rallied and took a three-games-to-one lead over the Wild Card Florida Marlins in the National League Championship Series. Florida shut the Cubs out in Game 5, but the Cubs returned home to Wrigley Field with young pitcher Mark Prior to lead the Cubs in Game 6 as they took a 3–0 lead into the 8th inning. It was at this point when a now-infamous incident took place. Several spectators attempted to catch a foul ball off the bat of Luis Castillo. A Chicago Cubs fan by the name of Steve Bartman, of Northbrook, Illinois, reached for the ball and deflected it away from the glove of Moisés Alou for the second out of the eighth inning. Alou reacted angrily toward the stands and after the game stated that he would have caught the ball. Alou at one point recanted, saying he would not have been able to make the play, but later said this was just an attempt to make Bartman feel better and believing the whole incident should be forgotten. Interference was not called on the play, as the ball was ruled to be on the spectator side of the wall. Castillo was eventually walked by Prior. Two batters later, and to the chagrin of the packed stadium, Cubs shortstop Alex Gonzalez misplayed an inning-ending double play, loading the bases. The error would lead to eight Florida runs and a Marlin victory. Despite sending Kerry Wood to the mound and holding a lead twice, the Cubs ultimately dropped Game 7, and failed to reach the World Series.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "The \"Steve Bartman incident\" was seen as the \"first domino\" in the turning point of the era, and the Cubs did not win a playoff game for the next eleven seasons.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "In 2004, the Cubs were a consensus pick by most media outlets to win the World Series. The offseason acquisition of Derek Lee (who was acquired in a trade with Florida for Hee-seop Choi) and the return of Greg Maddux only bolstered these expectations. Despite a mid-season deal for Nomar Garciaparra, misfortune struck the Cubs again. They led the Wild Card by 1.5 games over San Francisco and Houston on September 25. On that day, both teams lost, giving the Cubs a chance at increasing the lead to 2.5 games with only eight games remaining in the season, but reliever LaTroy Hawkins blew a save to the Mets, and the Cubs lost the game in extra innings. The defeat seemingly deflated the team, as they proceeded to drop six of their last eight games as the Astros won the Wild Card.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "Despite the fact that the Cubs had won 89 games, this fallout was decidedly unlovable, as the Cubs traded superstar Sammy Sosa after he had left the season's final game after the first pitch, which resulted in a fine (Sosa later stated that he had gotten permission from Baker to leave early, but he regretted doing so). Already a controversial figure in the clubhouse after his corked-bat incident, Sosa's actions alienated much of his once strong fan base as well as the few teammates still on good terms with him, to the point where his boombox was reportedly smashed after he left to signify the end of an era. The disappointing season also saw fans start to become frustrated with the constant injuries to ace pitchers Mark Prior and Kerry Wood. Additionally, the 2004 season led to the departure of popular commentator Steve Stone, who had become increasingly critical of management during broadcasts and was verbally attacked by reliever Kent Mercker. Things were no better in 2005, despite a career year from first baseman Derrek Lee and the emergence of closer Ryan Dempster. The club struggled and suffered more key injuries, only managing to win 79 games after being picked by many to be a serious contender for the NL pennant. In 2006, the bottom fell out as the Cubs finished 66–96, last in the NL Central.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "After finishing last in the NL Central with 66 wins in 2006, the Cubs re-tooled and went from \"worst to first\" in 2007. In the offseason they signed Alfonso Soriano to a contract at eight years for $136 million, and replaced manager Dusty Baker with fiery veteran manager Lou Piniella. After a rough start, which included a brawl between Michael Barrett and Carlos Zambrano, the Cubs overcame the Milwaukee Brewers, who had led the division for most of the season. The Cubs traded Barrett to the Padres, and later acquired catcher Jason Kendall from Oakland. Kendall was highly successful with his management of the pitching rotation and helped at the plate as well. By September, Geovany Soto became the full-time starter behind the plate, replacing the veteran Kendall. Winning streaks in June and July, coupled with a pair of dramatic, late-inning wins against the Reds, led to the Cubs ultimately clinching the NL Central with a record of 85–77. They met Arizona in the NLDS, but controversy followed as Piniella, in a move that has since come under scrutiny, pulled Carlos Zambrano after the sixth inning of a pitcher's duel with D-Backs ace Brandon Webb, to \"....save Zambrano for (a potential) Game 4.\" The Cubs, however, were unable to come through, losing the first game and eventually stranding over 30 baserunners in a three-game Arizona sweep.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "The Tribune company, in financial distress, was acquired by real-estate mogul Sam Zell in December 2007. This acquisition included the Cubs. However, Zell did not take an active part in running the baseball franchise, instead concentrating on putting together a deal to sell it.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "The Cubs successfully defended their National League Central title in 2008, going to the postseason in consecutive years for the first time since 1906–08. The offseason was dominated by three months of unsuccessful trade talks with the Orioles involving 2B Brian Roberts, as well as the signing of Chunichi Dragons star Kosuke Fukudome. The team recorded their 10,000th win in April, while establishing an early division lead. Reed Johnson and Jim Edmonds were added early on and Rich Harden was acquired from the Oakland Athletics in early July. The Cubs headed into the All-Star break with the NL's best record, and tied the league record with eight representatives to the All-Star game, including catcher Geovany Soto, who was named Rookie of the Year. The Cubs took control of the division by sweeping a four-game series in Milwaukee. On September 14, in a game moved to Miller Park due to Hurricane Ike, Zambrano pitched a no-hitter against the Astros, and six days later the team clinched by beating St. Louis at Wrigley. The club ended the season with a 97–64 record and met Los Angeles in the NLDS. The heavily favored Cubs took an early lead in Game 1, but James Loney's grand slam off Ryan Dempster changed the series' momentum. Chicago committed numerous critical errors and were outscored 20–6 in a Dodger sweep, which provided yet another sudden ending.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "The Ricketts family acquired a majority interest in the Cubs in 2009, ending the Tribune years. Apparently handcuffed by the Tribune's bankruptcy and the sale of the club to the Ricketts siblings, led by chairman Thomas S. Ricketts, the Cubs' quest for a NL Central three-peat started with notice that there would be less invested into contracts than in previous years. Chicago engaged St. Louis in a see-saw battle for first place into August 2009, but the Cardinals played to a torrid 20–6 pace that month, designating their rivals to battle in the Wild Card race, from which they were eliminated in the season's final week. The Cubs were plagued by injuries in 2009, and were only able to field their Opening Day starting lineup three times the entire season. Third baseman Aramis Ramírez injured his throwing shoulder in an early May game against the Milwaukee Brewers, sidelining him until early July and forcing journeyman players like Mike Fontenot and Aaron Miles into more prominent roles. Additionally, key players like Derrek Lee (who still managed to hit .306 with 35 home runs and 111 RBI that season), Alfonso Soriano, and Geovany Soto also nursed nagging injuries. The Cubs posted a winning record (83–78) for the third consecutive season, the first time the club had done so since 1972, and a new era of ownership under the Ricketts family was approved by MLB owners in early October.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "Rookie Starlin Castro debuted in early May (2010) as the starting shortstop. The club played poorly in the early season, finding themselves 10 games under .500 at the end of June. In addition, long-time ace Carlos Zambrano was pulled from a game against the White Sox on June 25 after a tirade and shoving match with Derrek Lee, and was suspended indefinitely by Jim Hendry, who called the conduct \"unacceptable\". On August 22, Lou Piniella, who had already announced his retirement at the end of the season, announced that he would leave the Cubs prematurely to take care of his sick mother. Mike Quade took over as the interim manager for the final 37 games of the year. Despite being well out of playoff contention the Cubs went 24–13 under Quade, the best record in baseball during that 37 game stretch, earning Quade the manager position going forward on October 19.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "On December 3, 2010, Cubs broadcaster and former third baseman, Ron Santo, died due to complications from bladder cancer and diabetes. He spent 13 seasons as a player with the Cubs, and at the time of his death was regarded as one of the greatest players not in the Hall of Fame. He was posthumously elected to the Major League Baseball Hall of Fame in 2012.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Despite trading for pitcher Matt Garza and signing free-agent slugger Carlos Peña, the Cubs finished the 2011 season 20 games under .500 with a record of 71–91. Weeks after the season came to an end, the club was rejuvenated in the form of a new philosophy, as new owner Tom Ricketts signed Theo Epstein away from the Boston Red Sox, naming him club President and giving him a five-year contract worth over $18 million, and subsequently discharged manager Mike Quade. Epstein, a proponent of sabremetrics and one of the architects of the 2004 and 2007 World Series championships in Boston, brought along Jed Hoyer from the Padres to fill the role of GM and hired Dale Sveum as manager. Although the team had a dismal 2012 season, losing 101 games (the worst record since 1966), it was largely expected. The youth movement ushered in by Epstein and Hoyer began as longtime fan favorite Kerry Wood retired in May, followed by Ryan Dempster and Geovany Soto being traded to Texas at the All-Star break for a group of minor league prospects headlined by Christian Villanueva, but also included little thought of Kyle Hendricks. The development of Castro, Anthony Rizzo, Darwin Barney, Brett Jackson and pitcher Jeff Samardzija, as well as the replenishing of the minor-league system with prospects such as Javier Baez, Albert Almora, and Jorge Soler became the primary focus of the season, a philosophy which the new management said would carry over at least through the 2013 season.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "The 2013 season resulted in much as the same the year before. Shortly before the trade deadline, the Cubs traded Matt Garza to the Texas Rangers for Mike Olt, Carl Edwards Jr, Neil Ramirez, and Justin Grimm. Three days later, the Cubs sent Alfonso Soriano to the New York Yankees for minor leaguer Corey Black. The mid season fire sale led to another last place finish in the NL Central, finishing with a record of 66–96. Although there was a five-game improvement in the record from the year before, Anthony Rizzo and Starlin Castro seemed to take steps backward in their development. On September 30, 2013, Theo Epstein made the decision to fire manager Dale Sveum after just two seasons at the helm of the Cubs. The regression of several young players was thought to be the main focus point, as the front office said Sveum would not be judged based on wins and losses. In two seasons as skipper, Sveum finished with a record of 127–197.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "The 2013 season was also notable as the Cubs drafted future Rookie of the Year and MVP Kris Bryant with the second overall selection.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "On November 7, 2013, the Cubs hired San Diego Padres bench coach Rick Renteria to be the 53rd manager in team history. The Cubs finished the 2014 season in last place with a 73–89 record in Rentería's first and only season as manager. Despite the poor record, the Cubs improved in many areas during 2014, including rebound years by Anthony Rizzo and Starlin Castro, ending the season with a winning record at home for the first time since 2009, and compiling a 33–34 record after the All-Star Break. However, following unexpected availability of Joe Maddon when he exercised a clause that triggered on October 14 with the departure of General Manager Andrew Friedman to the Los Angeles Dodgers, the Cubs relieved Rentería of his managerial duties on October 31, 2014. During the season, the Cubs drafted Kyle Schwarber with the fourth overall selection.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "Hall of Famer Ernie Banks died of a heart attack on January 23, 2015, shortly before his 84th birthday. The 2015 uniform carried a commemorative #14 patch on both its home and away jerseys in his honor.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "On November 2, 2014, the Cubs announced that Joe Maddon had signed a five-year contract to be the 54th manager in team history. On December 10, 2014, Maddon announced that the team had signed free agent Jon Lester to a six-year, $155 million contract. Many other trades and acquisitions occurred during the off season. The opening day lineup for the Cubs contained five new players including center fielder Dexter Fowler. Rookies Kris Bryant and Addison Russell were in the starting lineup by mid-April, along with the addition of rookie Kyle Schwarber who was added in mid-June. On August 30, Jake Arrieta threw a no hitter against the Los Angeles Dodgers. The Cubs finished the 2015 season in third place in the NL Central, with a record of 97–65, the third best record in the majors and earned a wild card berth. On October 7, in the 2015 National League Wild Card Game, Arrieta pitched a complete game shutout and the Cubs defeated the Pittsburgh Pirates 4–0.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "The Cubs defeated the Cardinals in the NLDS three-games-to-one, qualifying for a return to the NLCS for the first time in 12 years, where they faced the New York Mets. This was the first time in franchise history that the Cubs had clinched a playoff series at Wrigley Field. However, they were swept in four games by the Mets and were unable to make it to their first World Series since 1945. After the season, Arrieta won the National League Cy Young Award, becoming the first Cubs pitcher to win the award since Greg Maddux in 1992.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "Before the 2016 season, in an effort to shore up their lineup, free agents Ben Zobrist, Jason Heyward and John Lackey were signed. To make room for the Zobrist signing, Starlin Castro was traded to the Yankees for Adam Warren and Brendan Ryan, the latter of whom was released a week later. Also during the middle of the season, the Cubs traded their top prospect Gleyber Torres for Aroldis Chapman.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "In a season that included another no-hitter on April 21 by Jake Arrieta as well as an MVP award for Kris Bryant, the Cubs finished with the best record in Major League Baseball and won their first National League Central title since the 2008 season, winning by 17.5 games. The team also reached the 100-win mark for the first time since 1935 and won 103 total games, the most wins for the franchise since 1910. The Cubs defeated the San Francisco Giants in the National League Division Series and returned to the National League Championship Series for the second year in a row, where they defeated the Los Angeles Dodgers in six games. This was their first NLCS win since the series was created in 1969. The win earned the Cubs their first World Series appearance since 1945 and a chance for their first World Series win since 1908. Coming back from a three-games-to-one deficit, the Cubs defeated the Cleveland Indians in seven games in the 2016 World Series, They were the first team to come back from a three-games-to-one deficit since the Kansas City Royals in 1985. On November 4, the city of Chicago held a victory parade and rally for the Cubs that began at Wrigley Field, headed down Lake Shore Drive, and ended in Grant Park. The city estimated that over five million people attended the parade and rally, which made it one of the largest recorded gatherings in history.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "In an attempt to be the first team to repeat as World Series champions since the Yankees in 1998, 1999, and 2000, the Cubs struggled for most of the first half of the 2017 season, never moving more than four games over .500 and finishing the first half two games under .500. On July 15, the Cubs fell to a season-high 5.5 games out of first in the NL Central. The Cubs struggled mainly due to their pitching as Jake Arrieta and Jon Lester struggled and no starting pitcher managed to win more than 14 games (four pitchers won 15 games or more for the Cubs in 2016). The Cubs offense also struggled as Kyle Schwarber batted near .200 for most of the first half and was even sent to the minors. However, the Cubs recovered in the second half of the season to finish 22 games over .500 and win the NL Central by six games over the Milwaukee Brewers. The Cubs pulled out a five-game NLDS series win over the Washington Nationals to advance to the NLCS for the third consecutive year. For the second consecutive year, they faced the Dodgers. This time, however, the Dodgers defeated the Cubs in five games. In May 2017, the Cubs and the Rickets family formed Marquee Sports & Entertainment as a central sales and marketing company for the various Rickets family sports and entertainment assets: the Cubs, Wrigley Rooftops and Hickory Street Capital.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Prior to the 2018 season, the Cubs made several key free agent signings to bolster their pitching staff. The team signed starting pitcher Yu Darvish to a six-year, $126 million contract and veteran closer Brandon Morrow to two-year, $21-million contract, in addition to Tyler Chatwood and Steve Cishek. However, the Cubs struggled to stay healthy throughout the season. Anthony Rizzo missed much of April due to a back injury, and Bryant missed almost a month due to shoulder injury. However, Darvish, who only started eight games in 2018, was lost for the season due to elbow and triceps injuries. Morrow also faced two injuries before the team ruled him out for the season in September. The team maintained first place in their division for much of the season. The injury-depleted team only went 16–11 during September, which allowed the Milwaukee Brewers, to finish with the same record. The Brewers defeated the Cubs in a tie-breaker game to win the Central Division and secure the top-seed in the National League. The Cubs subsequently lost to the Colorado Rockies in the 2018 National League Wild Card Game for their earliest playoff exit in three seasons.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "The Cubs' roster remained largely intact going into the 2019 season. The team led the Central Division by a half-game over the Brewers at the All-Star Break. However, the team's control over the division once again dissipated going into final months of the season. The Cubs lost several key players to injuries, including Javier Báez, Anthony Rizzo, and Kris Bryant during this stretch. The team's postseason chances were compromised after suffering a nine-game losing streak in late September. The Cubs were eliminated from playoff contention on September 25, marking the first time the team had failed to qualify for the playoffs since 2014. The Cubs announced they would not renew manager Joe Maddon's contract at the end of the season.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "On October 24, 2019, the Cubs hired David Ross as their new manager. Ross led the Cubs to a 34–26 record during the 2020 season, which was shortened due to the COVID-19 pandemic. Starting pitcher Yu Darvish rebounded with an 8–3 record and 2.01 ERA, while also finishing as the runner-up for the NL Cy Young Award. The Cubs as a whole also won the first ever \"team\" Gold Glove Award and finished first in the NL Central, but were swept by the Miami Marlins in the Wild Card round.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "Following the 2020 season, the Cubs' president, Theo Epstein, resigned from his position on November 17, 2020. He was succeeded Jed Hoyer, who previously served as the team's general manager since 2011. However, it was announced that Hoyer would also remain as general manager until the team could conduct a proper search for a replacement. Prior to the 2021 season, the Cubs announced they would not re-sign Jon Lester, Kyle Schwarber, or Albert Almora. In addition, the team then traded Darvish and Victor Caratini to the San Diego Padres in exchange for prospects. After suffering an 11-game losing streak in late June and early July 2021 that put the Cubs out of the pennant race, they traded Javier Báez, Kris Bryant, and Anthony Rizzo and other pieces at the trade deadline. These trades allowed journeymen such as Rafael Ortega and Patrick Wisdom to craft larger roles on the team, the latter of whom set a Cubs rookie record for home runs at 28. By the end of the season, the only remaining players from the World Series team were Willson Contreras, Jason Heyward, and Kyle Hendricks.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "On October 15, 2021, the Cubs hired Cleveland assistant general manager Carter Hawkins as the new general manager. Following his hiring, the Cubs signed Marcus Stroman to a 3-year $71 million deal and previous World Series foe Yan Gomes to a 2-year $13 million deal. In another rebuilding year, the Cubs finished the 2022 season 74–88, finishing third in the division and 19 games out of first. In the ensuing off-season, Jason Heyward was released and Willson Contreras left in free agency, leaving Kyle Hendricks as the only remaining player from their 2016 championship team. Additionally, fan-favorite Rafael Ortega was non-tendered, signaling a new chapter for the Cubs after two straight years of mediocrity.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "In an attempt to bolster the team, the Cubs made big moves in free agency, signing all-star, reigning gold glove shortstop Dansby Swanson to a 7-year, $177 million contract as well as former MVP Cody Bellinger to a 1-year, $17.5 million deal. In addition, the ballclub added veterans such as Jameson Taillon, Trey Mancini, Mike Tauchman and Tucker Barnhart as well as trading for utility-man Miles Mastrobuoni. The team also extended key contributors from the previous season including Ian Happ, Nico Hoerner, and Drew Smyly. Despite these moves, the Cubs entered the 2023 season with low expectations. Projection systems such as PECOTA projected them to finish under .500 for the third year in a row. In May 2023, multiple top prospects were called up, namely Miguel Amaya, Matt Mervis, and Christopher Morel; although Mervis was eventually sent back down. After falling as far as 10 games below .500, the Cubs were propelled by an 8-game win streak versus the White Sox and Cardinals in late July, prompting the front office to become \"buyers\" at the August 1st trade deadline. Thus, the team acquired former-Cub Jeimer Candelario from the Nationals and reliever José Cuas from the Royals, firmly cementing their intent to compete and contend for postseason baseball. The Cubs were poised to earn a wild-card berth entering September 2023. However, the team lost 15 of their last 22 games and were eliminated from the playoffs after their penultimate game of the season. The Cubs finished the season with an 83–79 record.",
"title": "History"
},
{
"paragraph_id": 65,
"text": "On November 6, the Cubs fired Ross and hired Craig Counsell as their new manager.",
"title": "History"
},
{
"paragraph_id": 66,
"text": "The Cubs have played their home games at Wrigley Field, also known as \"The Friendly Confines\" since 1916. It was built in 1914 as Weeghman Park for the Chicago Whales, a Federal League baseball team. The Cubs also shared the park with the Chicago Bears of the NFL for 50 years. The ballpark includes a manual scoreboard, ivy-covered brick walls, and relatively small dimensions.",
"title": "Ballpark"
},
{
"paragraph_id": 67,
"text": "Located in Chicago's Lake View neighborhood, Wrigley Field sits on an irregular block bounded by Clark and Addison Streets and Waveland and Sheffield Avenues. The area surrounding the ballpark is typically referred to as Wrigleyville. There is a dense collection of sports bars and restaurants in the area, most with baseball-inspired themes, including Sluggers, Murphy's Bleachers and The Cubby Bear. Many of the apartment buildings surrounding Wrigley Field on Waveland and Sheffield Avenues have built bleachers on their rooftops for fans to view games and other sell space for advertisement. One building on Sheffield Avenue has a sign atop its roof which says \"Eamus Catuli!\" which roughly translates into Latin as \"Let's Go Cubs!\" and another chronicles the years since the last Division title, National League pennant, and World Series championship. On game days, many residents rent out their yards and driveways to people looking for parking spots. The uniqueness of the neighborhood itself has ingrained itself into the culture of the Chicago Cubs as well as the Wrigleyville neighborhood, and has led to being used for concerts and other sporting events, such as the 2010 NHL Winter Classic between the Chicago Blackhawks and Detroit Red Wings, as well as a 2010 NCAA men's football game between the Northwestern Wildcats and Illinois Fighting Illini.",
"title": "Ballpark"
},
{
"paragraph_id": 68,
"text": "In 2013, Tom Ricketts and team president Crane Kenney unveiled plans for a five-year, $575 million privately funded renovation of Wrigley Field. Called the 1060 Project, the proposed plans included vast improvements to the stadium's facade, infrastructure, restrooms, concourses, suites, press box, bullpens, and clubhouses, as well as a 6,000-square-foot (560 m) jumbotron to be added in the left field bleachers, batting tunnels, a 3,000-square-foot (280 m) video board in right field, and, eventually, an adjacent hotel, plaza, and office-retail complex. In previous years mostly all efforts to conduct any large-scale renovations to the field had been opposed by the city, former mayor Richard M. Daley (a staunch White Sox fan), and especially the rooftop owners.",
"title": "Ballpark"
},
{
"paragraph_id": 69,
"text": "Months of negotiations between the team, a group of rooftop properties investors, local Alderman Tom Tunney, and Chicago Mayor Rahm Emanuel followed with the eventual endorsements of the city's Landmarks Commission, the Plan Commission and final approval by the Chicago City Council in July 2013. The project began at the conclusion of the 2014 season.",
"title": "Ballpark"
},
{
"paragraph_id": 70,
"text": "The \"Bleacher Bums\" is a name given to fans, many of whom spend much of the day heckling, who sit in the bleacher section at Wrigley Field. Initially, the group was called \"bums\" because they attended most of the games, and as Wrigley did not yet have lights, these were all day games, so it was jokingly presumed these fans were jobless. The group was started in 1967 by dedicated fans Ron Grousl, Tom Nall and \"mad bugler\" Mike Murphy, who was a sports radio host during mid days on Chicago-based WSCR AM 670 \"The Score\". Murphy has said that Grousl started the Wrigley tradition of throwing back opposing teams' home run balls. A 1977 Broadway play called Bleacher Bums, starring Joe Mantegna, Dennis Farina, Dennis Franz, and James Belushi, was based on a group of Cub fans who frequented the club's games.",
"title": "Ballpark"
},
{
"paragraph_id": 71,
"text": "Beginning in the days of P.K. Wrigley and the 1937 bleacher/scoreboard reconstruction, and prior to modern media saturation, a flag with either a \"W\" or an \"L\" has flown from atop the scoreboard masthead, indicating the day's result(s) when baseball was played at Wrigley. In case of a split doubleheader, both the \"W\" and \"L\" flags are flown.",
"title": "Culture"
},
{
"paragraph_id": 72,
"text": "Past Cubs media guides show that originally the flags were blue with a white \"W\" and white with a blue \"L\". In 1978, consistent with the dominant colors of the flags, blue and white lights were mounted atop the scoreboard, denoting \"win\" and \"loss\" respectively for the benefit of nighttime passers-by.",
"title": "Culture"
},
{
"paragraph_id": 73,
"text": "The flags were replaced by 1990, the first year in which the Cubs media guide reports the switch to the now-familiar colors of the flags: White with blue \"W\" and blue with white \"L\". In addition to needing to replace the worn-out flags, by then the retired numbers of Banks and Williams were flying on the foul poles, as white with blue numbers; so the \"good\" flag was switched to match that scheme.",
"title": "Culture"
},
{
"paragraph_id": 74,
"text": "This long-established tradition has evolved to fans carrying the white-with-blue-W flags to both home and away games, and displaying them after a Cub win. The flags are known as the Cubs Win Flag. The flags have become more and more popular each season since 1998, and are now even sold as T-shirts with the same layout. In 2009, the tradition spilled over to the NHL as Chicago Blackhawks fans adopted a red and black \"W\" flag of their own.",
"title": "Culture"
},
{
"paragraph_id": 75,
"text": "During the early and mid-2000s, Chip Caray usually declared that a Cubs win at home meant it was \"White flag time at Wrigley!\" More recently, the Cubs have promoted the phrase \"Fly the W!\" among fans and on social media.",
"title": "Culture"
},
{
"paragraph_id": 76,
"text": "The official Cubs team mascot is a young bear cub, named Clark, described by the team's press release as a young and friendly Cub. Clark made his debut at Advocate Health Care on January 13, 2014, the same day as the press release announcing his installation as the club's first-ever official physical mascot. The bear cub itself was used in the clubs since the early 1900s and was the inspiration of the Chicago Staleys changing their team's name to the Chicago Bears, because the Cubs allowed the bigger football players—like bears to cubs—to play at Wrigley Field in the 1930s.",
"title": "Culture"
},
{
"paragraph_id": 77,
"text": "The Cubs had no official physical mascot prior to Clark, though a man in a 'polar bear' looking outfit, called \"The Bear-man\" (or Beeman), which was mildly popular with the fans, paraded the stands briefly in the early 1990s. There is no record of whether or not he was just a fan in a costume or employed by the club. Through the 2013 season, there were \"Cubbie-bear\" mascots outside of Wrigley on game day, but none were employed by the team. They pose for pictures with fans for tips. The most notable of these was \"Billy Cub\" who worked outside of the stadium for over six years until July 2013, when the club asked him to stop. Billy Cub, who is played by fan John Paul Weier, had unsuccessfully petitioned the team to become the official mascot.",
"title": "Culture"
},
{
"paragraph_id": 78,
"text": "Another unofficial but much more well-known mascot is Ronnie \"Woo Woo\" Wickers who is a longtime fan and local celebrity in the Chicago area. He is known to Wrigley Field visitors for his idiosyncratic cheers at baseball games, generally punctuated with an exclamatory \"Woo!\" (e.g., \"Cubs, woo! Cubs, woo! Big-Z, woo! Zambrano, woo! Cubs, woo!\") Longtime Cubs announcer Harry Caray dubbed Wickers \"Leather Lungs\" for his ability to shout for hours at a time. He is not employed by the team, although the club has on two separate occasions allowed him into the broadcast booth and allow him some degree of freedom once he purchases or is given a ticket by fans to get into the games. He is largely allowed to roam the park and interact with fans by Wrigley Field security.",
"title": "Culture"
},
{
"paragraph_id": 79,
"text": "During the summer of 1969, a Chicago studio group produced a single record called \"Hey Hey! Holy Mackerel! (The Cubs Song)\" whose title and lyrics incorporated the catch-phrases of the respective TV and radio announcers for the Cubs, Jack Brickhouse and Vince Lloyd. Several members of the Cubs recorded an album called Cub Power which contained a cover of the song. The song received a good deal of local airplay that summer, associating it very strongly with that season. It was played much less frequently thereafter, although it remained an unofficial Cubs theme song for some years after.",
"title": "Culture"
},
{
"paragraph_id": 80,
"text": "For many years, Cubs radio broadcasts started with \"It's a Beautiful Day for a Ball Game\" by the Harry Simeone Chorale. In 1979, Roger Bain released a 45 rpm record of his song \"Thanks Mr. Banks\", to honor \"Mr. Cub\" Ernie Banks.",
"title": "Culture"
},
{
"paragraph_id": 81,
"text": "The song \"Go, Cubs, Go!\" by Steve Goodman was recorded early in the 1984 season, and was heard frequently during that season. Goodman died in September of that year, four days before the Cubs clinched the National League Eastern Division title, their first title in 39 years. Since 1984, the song started being played from time to time at Wrigley Field; since 2007, the song has been played over the loudspeakers following each Cubs home victory.",
"title": "Culture"
},
{
"paragraph_id": 82,
"text": "The Mountain Goats recorded a song entitled \"Cubs in Five\" on its 1995 EP Nine Black Poppies which refers to the seeming impossibility of the Cubs winning a World Series in both its title and Chorus.",
"title": "Culture"
},
{
"paragraph_id": 83,
"text": "In 2007, Pearl Jam frontman Eddie Vedder composed a song dedicated to the team called \"All the Way\". Vedder, a Chicago native, and lifelong Cubs fan, composed the song at the request of Ernie Banks. Pearl Jam has played this song live multiple times several of which occurring at Wrigley Field. Eddie Vedder has played this song live twice, at his solo shows at the Chicago Auditorium on August 21 and 22, 2008.",
"title": "Culture"
},
{
"paragraph_id": 84,
"text": "An album entitled Take Me Out to a Cubs Game was released in 2008. It is a collection of 17 songs and other recordings related to the team, including Harry Caray's final performance of \"Take Me Out to the Ball Game\" on September 21, 1997, the Steve Goodman song mentioned above, and a newly recorded rendition of \"Talkin' Baseball\" (subtitled \"Baseball and the Cubs\") by Terry Cashman. The album was produced in celebration of the 100th anniversary of the Cubs' 1908 World Series victory and contains sounds and songs of the Cubs and Wrigley Field.",
"title": "Culture"
},
{
"paragraph_id": 85,
"text": "Season 1 Episode 3 of the American television show Kolchak: The Night Stalker (\"They Have Been, They Are, They Will Be...\") is supposed to take place during a fictional 1974 World Series matchup between the Chicago Cubs and the Boston Red Sox.",
"title": "Culture"
},
{
"paragraph_id": 86,
"text": "The 1986 film Ferris Bueller's Day Off showed a game played by the Cubs when Ferris' principal goes to a bar looking for him.",
"title": "Culture"
},
{
"paragraph_id": 87,
"text": "The 1989 film Back to the Future Part II depicts the Chicago Cubs defeating a baseball team from Miami in the 2015 World Series, ending the longest championship drought in all four of the major North American professional sports leagues. In 2015, the Miami Marlins failed to make the playoffs but the Cubs were able to make it to the 2015 National League Wild Card round and move on to the 2015 National League Championship Series by October 21, 2015, the date where protagonist Marty McFly traveled to the future in the film. However, it was on October 21 that the Cubs were swept by the New York Mets in the NLCS.",
"title": "Culture"
},
{
"paragraph_id": 88,
"text": "The 1993 film Rookie of the Year, directed by Daniel Stern, centers on the Cubs as a team going nowhere into August when the team chances upon 12-year-old Cubs fan Henry Rowengartner (Thomas Ian Nicholas), whose right (throwing) arm tendons have healed tightly after a broken arm and granted him the ability to regularly pitch at speeds in excess of 100 miles per hour (160 km/h). Following the Cubs' win over the Cleveland Indians in Game 7 of the 2016 World Series, Nicholas, in celebration, tweeted the final shot from the movie: Henry holding his fist up to the camera to show a Cubs World Series ring. Director Daniel Stern, also reprised his role as Brickma during the Cubs playoff run.",
"title": "Culture"
},
{
"paragraph_id": 89,
"text": "\"Baseball's Sad Lexicon\", also known as \"Tinker to Evers to Chance\" after its refrain, is a 1910 baseball poem by Franklin Pierce Adams. The poem is presented as a single, rueful stanza from the point of view of a New York Giants fan seeing the talented Chicago Cubs infield of shortstop Joe Tinker, second baseman Johnny Evers, and first baseman Frank Chance complete a double play. The trio began playing together with the Cubs in 1902, and formed a double-play combination that lasted through April 1912. The Cubs won the pennant four times between 1906 and 1910, often defeating the Giants en route to the World Series.",
"title": "Culture"
},
{
"paragraph_id": 90,
"text": "The poem was first published in the New York Evening Mail on July 12, 1912. Popular among sportswriters, numerous additional verses were written. The poem gave Tinker, Evers, and Chance increased popularity and has been credited with their elections to the National Baseball Hall of Fame in 1946.",
"title": "Culture"
},
{
"paragraph_id": 91,
"text": "The Cardinals–Cubs rivalry refers to games between the Cubs and St. Louis Cardinals. The rivalry is also known as the Downstate Illinois rivalry or the I-55 Series (in earlier years as the Route 66 Series) as both cities are located along Interstate 55 (which itself succeeded the famous U.S. Route 66). The Cubs lead the series 1,253–1,196, through October 2021, while the Cardinals lead in National League pennants with 19 against the Cubs' 17. The Cubs have won 11 of those pennants in Major League Baseball's Modern Era (1901–present), while all 19 of the Cardinals' pennants have been won since 1926. The Cardinals also have an edge when it comes to World Series successes, having won 11 championships to the Cubs' 3. Games featuring the Cardinals and Cubs see numerous visiting fans in either Busch Stadium in St. Louis or Wrigley Field in Chicago given the proximity of both cities. When the National League split into multiple divisions, the Cardinals and Cubs remained together through the two realignments. This has added intensity to several pennant races over the years. The Cardinals and Cubs have played each other once in the postseason, 2015 National League Division Series, which the Cubs won 3–1.",
"title": "Rivalries"
},
{
"paragraph_id": 92,
"text": "The Cubs' rivalry with the Milwaukee Brewers refers to games between the Milwaukee Brewers and Chicago Cubs, the rivalry is also known as the I-94 rivalry due to the proximity between clubs' ballparks along an 83.3-mile drive along Interstate 94. The rivalry followed a 1969–97 rivalry between the Brewers, then in the American League, and the Chicago White Sox. The proximity of the two cities and the Bears-Packers football rivalry helped make the Cubs-Brewers rivalry one of baseball's best. In the 2018 season, the teams faced off in a Game 163 for the NL Central division title, which Milwaukee won.",
"title": "Rivalries"
},
{
"paragraph_id": 93,
"text": "The Cubs have held a longtime rivalry with crosstown foes the Chicago White Sox as Chicago has only retained two franchises in one major sports league since the Chicago Cardinals of the NFL relocated in 1960. The rivalry takes multiple names such as the Wintrust Crosstown Cup, Crosstown Classic, The Windy City Showdown, Red Line Series, City Series, Crosstown Series, Crosstown Cup or Crosstown Showdown referring to both Major League Baseball teams fighting for dominance across Chicago. The terms \"North Siders\" and \"South Siders\" are synonymous with the respective teams and their fans as Wrigley Field is located in the North side of the city while Guaranteed Rate Field is in the South, setting up an enduring cross-town rivalry with the White Sox.",
"title": "Rivalries"
},
{
"paragraph_id": 94,
"text": "Notably this rivalry predates the Interleague Play Era, with the only postseason meeting against the Sox occurring in the 1906 World Series. It was the first World Series between teams from the same city. The White Sox won the series 4 games to 2, over the highly favored Cubs who had won a record 116 games during the regular season. The rivalry continued through of exhibition games, culminating in the Crosstown Classic from 1985 to 1995, in which the White Sox were undefeated at 10–0–2. The White Sox currently lead the regular season series 72–64.",
"title": "Rivalries"
},
{
"paragraph_id": 95,
"text": "The Cubs currently wear pinstriped white uniforms at home. This design dates back to 1957 when the Cubs debuted the first version of the uniform. The basic look has the Cubs logo on the left chest, along with blue pinstripes and blue numbers. A left sleeve patch featuring the cub head logo was added in 1962. This design was then tweaked to include a red circle and angrier expression in 1979, before returning to a cuter version in 1994. In 1997, the patch was changed to the current \"walking cub\" logo. During this period the uniform received a few alterations, going from zippers to pullovers with sleeve stripes to the current buttoned look. The primary Cubs logo also received thicker letters and circle, while blue numbers received red trim and player names were added.",
"title": "Uniforms"
},
{
"paragraph_id": 96,
"text": "The Cubs' road gray uniform has been in use since 1997. This design has \"Chicago\" in blue letters with white trim arranged in a radial arch, along with red chest numbers with white trim. The back of the uniform has player names in blue with white trim, and numbers in red with white trim. This set also features the \"walking cub\" patch on the left sleeve.",
"title": "Uniforms"
},
{
"paragraph_id": 97,
"text": "The Cubs also wear a blue alternate uniform. The current design, first introduced in 1997, has the \"walking cub\" logo on the left chest, along with red letters and numbers with white trim. Prior to 2023, the National League logo took its place on the right sleeve; this has since been removed in anticipation of a future advertisement patch. The Cubs alternates are usually worn on road games, though in the past it was also worn at home, and at one point, a home blue version minus the player's name was used as well.",
"title": "Uniforms"
},
{
"paragraph_id": 98,
"text": "All three designs are paired with an all-blue cap with the red \"C\" trimmed in white, which was first worn in 1959.",
"title": "Uniforms"
},
{
"paragraph_id": 99,
"text": "Beginning in 2021, Major League Baseball and Nike introduced the \"City Connect\" series, featuring uniquely designed uniforms inspired by each city's community and personality. The Cubs' design is navy blue with light blue accents on both the uniform and pants, and features the \"Wrigleyville\" wordmark inspired by the Wrigley Field marquee. Caps are navy blue with a light blue brim, and features the trademark \"C\" crest in white with light blue trim, along with the red six-point star inside. The left sleeve patch features the full team name inside a navy circle, along with a specially designed municipal device incorporating the Chicago city flag.",
"title": "Uniforms"
},
{
"paragraph_id": 100,
"text": "Prior to unveiling their current look, the Cubs went through a variety of uniform looks in their early years, incorporating either a \"standing cub\" logo, a primitive version of the \"C-UBS\" logo, a \"wishbone C\" mark (later adopted by the Chicago Bears of the NFL), or the team or city name in various fonts. The uniform itself went from having pinstripes to racing stripes and chest piping. Navy blue and sometimes red served as the team colors through the mid-1940s when the team switched to the more familiar royal blue and red color scheme.",
"title": "Uniforms"
},
{
"paragraph_id": 101,
"text": "After unveiling the first version of what later became their current home uniform in 1957, the Cubs went through various changes with the road uniform. It had the full team name in red letters for its first season, before going to a more basic city name in blue letters with red trim. A cub head logo was also added to the sleeves in 1962, with several alterations coming afterward. By 1969, the red trim was removed, and chest numbers were added. Switching to pullovers in 1972, the Cubs' road uniform remained gray, but the chest numbers were moved to the middle before returning to the left side the following year. This was then changed to a powder blue base in 1976, added pinstripes in 1978, and added player names the following year. From 1982 to 1989, the Cubs wore blue tops with plain white pants for road games, featuring the primary Cubs logo in front and red letters with white trim.",
"title": "Uniforms"
},
{
"paragraph_id": 102,
"text": "In 1990, the Cubs returned to wearing gray uniforms with buttons on the road. However, it also went through some cosmetic changes, from a straight \"Chicago\" wordmark with red chest numbers (later with a taller font and red back numbers), to a script \"Cubs\" wordmark written diagonally. A blue alternate uniform returned in 1994, also incorporating the script \"Cubs\" wordmark in red minus the chest numbers. This was then changed to the \"walking cub\" logo in 1997, which was also incorporated as a sleeve patch on the road uniform. From 1994 to 2008, the Cubs also wore an alternate road blue cap with red brim. In 2014, the Cubs wore a second gray road uniform, this time with the block \"Cubs\" lettering with blue piping and red block numbers, but only lasted two seasons.",
"title": "Uniforms"
},
{
"paragraph_id": 103,
"text": "*Due to the COVID-19 pandemic, no fans were allowed at Wrigley Field during the 2020 season.",
"title": "Regular season home attendance"
},
{
"paragraph_id": 104,
"text": "**Attendance capped at 20% capacity until June 11.",
"title": "Regular season home attendance"
},
{
"paragraph_id": 105,
"text": "Throughout the history of the Chicago Cubs' franchise, 15 different Cubs pitchers have pitched no-hitters; however, no Cubs pitcher has thrown a perfect game.",
"title": "Distinctions"
},
{
"paragraph_id": 106,
"text": "As of 2020, the Chicago Cubs are ranked as the 17th most valuable sports team in the world, 14th in the United States, fourth in MLB, and tied for second in the city of Chicago with the Bulls.",
"title": "Distinctions"
},
{
"paragraph_id": 107,
"text": "The Chicago Cubs retired numbers are commemorated on pinstriped flags flying from the foul poles at Wrigley Field, with the exception of Jackie Robinson, the Brooklyn Dodgers player whose number 42 was retired for all clubs. The first retired number flag, Ernie Banks' number 14, was raised on the left-field pole, and they have alternated since then. 14, 10 and 31 (Jenkins) fly on the left-field pole; and 26, 23 and 31 (Maddux) fly on the right-field pole.",
"title": "Team"
},
{
"paragraph_id": 108,
"text": "* Robinson's number was retired by all MLB clubs.",
"title": "Team"
},
{
"paragraph_id": 109,
"text": "In August 2021, the Cubs reintroduced the Hall of Fame exhibit. The team had first established a Cubs Hall of Fame in 1982, inducting 41 members in the next four years. Six years later, it began again with the Cubs Walk of Fame, which enshrined nine until it was paused in 1998. As such, every member of those exhibits was inducted into the new Hall of Fame alongside the five most recent Cubs to enter the National Baseball Hall of Fame (Sutter, Dawson, Santo, Maddux, Smith). The 2021 class inducted one new member with Margaret Donahue (team corporate/executive secretary and vice president) to make 56 names inducted as the inaugural members of the Hall.",
"title": "Team"
},
{
"paragraph_id": 110,
"text": "Two stipulations were put for induction: at least five years as a Cub and significant contributions done as a member of the Cubs. The exhibit is located in the Budweiser Bleacher concourse in left field of Wrigley Field.",
"title": "Team"
},
{
"paragraph_id": 111,
"text": "The Chicago Cubs farm system consists of seven minor league affiliates.",
"title": "Team"
},
{
"paragraph_id": 112,
"text": "The Chicago White Stockings, (today's Chicago Cubs), began spring training in Hot Springs, Arkansas, in 1886. President Albert Spalding (founder of Spalding Sporting Goods) and player/manager Cap Anson brought their players to Hot Springs and played at the Hot Springs Baseball Grounds. The concept was for the players to have training and fitness before the start of the regular season, utilizing the bath houses of Hot Springs after practices. After the White Stockings had a successful season in 1886, winning the National League Pennant, other teams began bringing their players to Hot Springs for \"spring training\". The Chicago Cubs, St. Louis Browns, New York Yankees, St. Louis Cardinals, Cleveland Spiders, Detroit Tigers, Pittsburgh Pirates, Cincinnati Reds, New York Highlanders, Brooklyn Dodgers and Boston Red Sox were among the early squads to arrive. Whittington Park (1894) and later Majestic Park (1909) and Fogel Field (1912) were all built in Hot Springs specifically to host Major League teams.",
"title": "Team"
},
{
"paragraph_id": 113,
"text": "The Cubs' current spring training facility is located in Sloan Park in Mesa, Arizona, where they play in the Cactus League. The park seats 15,000, making it Major League baseball's largest spring training facility by capacity. The Cubs annually sell out most of their games both at home and on the road. Before Sloan Park opened in 2014, the team played games at HoHoKam Park – Dwight Patterson Field from 1979. \"HoHoKam\" is literally translated from Native American as \"those who vanished\". The North Siders have called Mesa their spring home for most seasons since 1952.",
"title": "Team"
},
{
"paragraph_id": 114,
"text": "In addition to Mesa, the club has held spring training in Hot Springs, Arkansas (1886, 1896–1900), (1909–1910) New Orleans (1870, 1907, 1911–1912); Champaign, Illinois (1901–02, 1906); Los Angeles (1903–04, 1948–1949), Santa Monica, California (1905); French Lick, Indiana (1908, 1943–1945); Tampa, Florida (1913–1916); Pasadena, California (1917–1921); Santa Catalina Island, California (1922–1942, 1946–1947, 1950–1951); Rendezvous Park in Mesa (1952–1965); Blair Field in Long Beach, California (1966); and Scottsdale, Arizona (1967–1978).",
"title": "Team"
},
{
"paragraph_id": 115,
"text": "The curious location on Catalina Island stemmed from Cubs owner William Wrigley Jr.'s then-majority interest in the island in 1919. Wrigley constructed a ballpark on the island to house the Cubs in spring training: it was built to the same dimensions as Wrigley Field. The ballpark was called Wrigley Field of Avalon. (The ballpark is long gone, but a clubhouse built by Wrigley to house the Cubs exists as the Catalina County Club.) However, by 1951 the team chose to leave Catalina Island and spring training was shifted to Mesa, Arizona. The Cubs' 30-year association with Catalina is chronicled in the book, The Cubs on Catalina, by Jim Vitti, which was named International 'Book of the Year' by The Sporting News. The Cubs left Catalina after some bad weather in 1951, choosing to move to Mesa, a city where the Wrigleys also had interests. Today, there is an exhibit at the Catalina Museum dedicated to the Cubs' spring training on the island.",
"title": "Team"
},
{
"paragraph_id": 116,
"text": "The former location in Mesa is actually the second Hohokam Park (Hohokam Stadium 1997–2013); the first was built in 1976 as the spring-training home of the Oakland Athletics who left the park in 1979. Apart from HoHoKam Park and Sloan Park the Cubs also have another Mesa training facility called Fitch Park, this complex provides 25,000 square feet (2,300 m) of team facilities, including major league clubhouse, four practice fields, one practice infield, enclosed batting tunnels, batting cages, a maintenance facility, and administrative offices for the Cubs.",
"title": "Team"
},
{
"paragraph_id": 117,
"text": "Cubs radio rights are held by Entercom; its acquisition of the radio rights effective 2015 (under CBS Radio) ended the team's 90-year association with 720 WGN. During the first season of the contract, Cubs games aired on WBBM, taking over as flagship of the Chicago Cubs Radio Network. On November 11, 2015, CBS announced that the Cubs would move to WBBM's all-sports sister station, WSCR, beginning in the 2016 season. The move was enabled by WSCR's end of their rights agreement for the White Sox, who moved to WLS.",
"title": "Media"
},
{
"paragraph_id": 118,
"text": "The play-by-play voice of the Cubs is Pat Hughes, who has held the position since 1996, joined by Ron Coomer. Former Cubs third baseman and fan favorite Ron Santo had been Hughes' long-time partner until his death in 2010. Keith Moreland replaced Hall of Fame inductee Santo for three seasons, followed by Coomer for the 2014 season.",
"title": "Media"
},
{
"paragraph_id": 119,
"text": "The club publishes a traditional media guide. Formerly, the club also produced an official magazine Vineline, which had 12 annual issues and ran for 33 years, spotlighting players and events involving the club. The club discontinued the magazine in 2018.",
"title": "Media"
},
{
"paragraph_id": 120,
"text": "As of the 2020 season, all Cubs games not aired on broadcast television will air on Marquee Sports Network, a joint venture between the team and Sinclair Broadcast Group. The venture was officially announced in February 2019.",
"title": "Media"
},
{
"paragraph_id": 121,
"text": "WGN-TV had a long-term association with the team, having aired Cubs games via its WGN Sports department from its establishment in 1948, through the 2019 season. For a period, WGN's Cubs games aired nationally on WGN America (formerly Superstation WGN); however, prior to the 2015 season, the Cubs, as well as all other Chicago sports programming, was dropped from the channel as part of its re-positioning as a general entertainment cable channel. To compensate, all games carried by over-the-air channels were syndicated to a network of other television stations within the Cubs' market, which includes Illinois and parts of Indiana and Iowa. Due to limits on program pre-emptions imposed by WGN's former affiliations with The WB and its successor The CW, WGN occasionally sub-licensed some of its sports broadcasts to another station in the market, particularly independent station WCIU-TV (and later MyNetworkTV station WPWR-TV).",
"title": "Media"
},
{
"paragraph_id": 122,
"text": "In November 2013, the Cubs exercised an option to terminate its existing broadcast rights with WGN-TV after the 2014 season, requesting a higher-valued contract lasting through the 2019 season (which would be aligned with the end of its contract with CSN Chicago). The team would split its over-the-air package with a second partner, ABC owned-and-operated station WLS-TV, who would acquire rights to 25 games per season from 2015 through 2019. On January 7, 2015, WGN announced that it would air 45 games per-season through 2019.",
"title": "Media"
},
{
"paragraph_id": 123,
"text": "From 1999, regional sports network FSN Chicago served as a cable rightsholder for games not on WGN or MLB's national television outlets. In 2003, the owners of the Cubs, White Sox, Blackhawks, and Bulls all broke away from FSN Chicago, and partnered with Comcast to form Comcast SportsNet Chicago (CSN Chicago, now NBC Sports Chicago) in 2004, assuming cable rights to all four teams.",
"title": "Media"
},
{
"paragraph_id": 124,
"text": "As of the 2021 season, Jon Sciambi serves as the Cubs' lead television play-by-play announcer; when Sciambi is on national TV/radio assignment with ESPN, his role would be filled by either Chris Myers, Beth Mowins, or Pat Hughes. Sciambi is joined by Jim Deshaies, Ryan Dempster, Joe Girardi and/or Rick Sutcliffe.",
"title": "Media"
},
{
"paragraph_id": 125,
"text": "Len Kasper (play-by-play, 2005–2020), Bob Brenly (analyst, 2005–2012), Chip Caray (play-by-play, 1998–2004), Steve Stone (analyst, 1983–2000, 2003–04), Joe Carter (analyst for WGN-TV games, 2001–02) and Dave Otto (analyst for FSN Chicago games, 2001–02) also have spent time broadcasting from the Cubs booth since the death of Harry Caray in 1998.",
"title": "Media"
}
] | The Chicago Cubs are an American professional baseball team based in Chicago. The Cubs compete in Major League Baseball (MLB) as part of the National League (NL) Central division. The club plays its home games at Wrigley Field, which is located on Chicago's North Side. The Cubs are one of two major league teams based in Chicago; the other, the Chicago White Sox, are a member of the American League (AL) Central division. The Cubs, first known as the White Stockings, were a founding member of the NL in 1876, becoming the Chicago Cubs in 1903. Throughout the club's history, the Cubs have played in a total of 11 World Series. The 1906 Cubs won 116 games, finishing 116–36 and posting a modern-era record winning percentage of .763, before losing the World Series to the Chicago White Sox by four games to two. The Cubs won back-to-back World Series championships in 1907 and 1908, becoming the first major league team to play in three consecutive World Series, and the first to win it twice. Most recently, the Cubs won the 2016 National League Championship Series and 2016 World Series, which ended a 71-year National League pennant drought and a 108-year World Series championship drought, both of which are record droughts in Major League Baseball. The 108-year drought was also the longest such occurrence in all major sports leagues in the United States and Canada. Since the start of divisional play in 1969, the Cubs have appeared in the postseason 11 times through the 2022 season. The Cubs are known as "the North Siders", a reference to the location of Wrigley Field within the city of Chicago, and in contrast to the White Sox, whose home field is located on the South Side. Through 2023, the franchise's all-time record is 11,244–10,688 (.513). | 2001-10-23T01:10:25Z | 2023-11-30T05:01:30Z | [
"Template:Main",
"Template:Mlby",
"Template:See also",
"Template:Cite web",
"Template:Cite twitter",
"Template:Refbegin",
"Template:Portal bar",
"Template:Infobox MLB",
"Template:Authority control",
"Template:Retired number list",
"Template:Ford C. Frick award list",
"Template:Cbignore",
"Template:Short description",
"Template:Frac",
"Template:Decrease",
"Template:Chicago Cubs roster",
"Template:S-ttl",
"Template:Pp",
"Template:Further",
"Template:Div col end",
"Template:Commons category",
"Template:S-aft",
"Template:Chicago Cubs",
"Template:Navboxes",
"Template:Winning percentage",
"Template:Abbr",
"Template:Ref",
"Template:Nowrap",
"Template:Note label",
"Template:Div col",
"Template:Reflist",
"Template:Refend",
"Template:Convert",
"Template:S-end",
"Template:MLBTeam",
"Template:Center",
"Template:MLB Year",
"Template:Notelist",
"Template:S-start-collapsible",
"Template:TOClimit",
"Template:Steady",
"Template:Baseball hall of fame list",
"Template:Cite news",
"Template:Cite book",
"Template:S-bef",
"Template:Multiple image",
"Template:Sup",
"Template:Dead link",
"Template:Increase"
] | https://en.wikipedia.org/wiki/Chicago_Cubs |
6,655 | Coldcut | Coldcut are an English electronic music duo composed of Matt Black and Jonathan More. Credited as pioneers of pop sampling in the 1980s, Coldcut are also considered the first stars of UK electronic dance music due to their innovative style, which featured cut-up samples of hip-hop, soul, funk, spoken word and various other types of music, as well as video and multimedia. According to Spin, "in '87 Coldcut pioneered the British fad for 'DJ records'".
Coldcut's records first introduced the public to pop artists Yazz and Lisa Stansfield, both of whom went on to pop chart success. In addition, Coldcut have remixed and produced tracks by the likes of Eric B & Rakim, James Brown, Queen Latifah, Eurythmics, INXS, Steve Reich, Blondie, the Fall, Pierre Henry, Nina Simone, Fog, Red Snapper, and BBC Radiophonic Workshop.
Beyond their work as a production duo, Coldcut are the founders of Ninja Tune, an independent record label in London, England (with satellite offices in Los Angeles and Berlin) with an overall emphasis on encouraging interactive technology and finding innovative uses of software. The label's first releases (the first four volumes of DJ Food - Jazz Brakes) were produced by Coldcut in the early '90s, and composed of instrumental hip-hop cuts that led the duo to help pioneer the trip hop genre, with artists such as Funki Porcini, the Herbaliser and DJ Vadim.
In 1986, computer programmer Matt Black and ex-art teacher Jonathan More were part-time DJs on the rare groove scene. More also DJed on pirate radio, hosting the Meltdown Show on Kiss FM and worked at the Reckless Records store on Berwick Street, London where Black visited as a customer. The first collaboration between the two artists was "Say Kids What Time Is It?" on a white label in January 1987, which mixed The Jungle Book's "King of the Swingers" with the break from James Brown's "Funky Drummer". The innovation of "Say Kids..." caused More and Black to be heralded by SPIN as "the first Brit artists to really get hip-hop's class-cutup aesthetic". It is regarded as the UK's first breaks record, the first UK record to be built entirely of samples and "the final link in the chain connecting European collage-experiment with the dance-remix-scratch edit". This was later sampled in "Pump Up the Volume" by MARRS, a single that reached #1 in the UK in October 1987.
Though Black had joined Kiss FM with his own mix-based show, the pair eventually joined forces later in 1987 on their own show, Solid Steel. The eclectic show became a unifying force in underground experimental electronic music and is still running, having celebrated 25 years in 2013.
The duo adopted the name "Coldcut" and set up a record label called Ahead Of Our Time to release the single "Beats + Pieces" (one of the formats also included "That Greedy Beat") in 1987. All of these tracks were assembled using cassette pause-button edits and later spliced tape edits that would sometimes run "all over the room". The duo sampled everything from Led Zeppelin to James Brown. Electronic act the Chemical Brothers have described "Beats + Pieces" as the "first bigbeat record", a style which emerged in the mid-1990s.
Coldcut's first mainstream success came when Julian Palmer from Island Records asked them to remix Eric B. & Rakim's "Paid in Full". Released in October 1987, the landmark remix is said to have "laid the groundwork for hip hop's entry into the UK mainstream", becoming a breakthrough hit for Eric B & Rakim outside the U.S., reaching No. 15 in the UK, and the top 20 in a number of European countries. It featured a prominent Ofra Haza sample and many other vocal cut ups as well as a looped rhythm which later, when sped up, proved popular in the Breakbeat genre. Off the back of its success in clubs, the Coldcut "Seven Minutes of Madness" remix ended up being promoted as the single in the UK.
In 1988, More and Black formed Hex, a self-titled "multimedia pop group", with Mile Visman and Rob Pepperell. While working on videos for artists such as Kevin Saunderson, Queen Latifah and Spiritualized, Hex's collaborative work went on to incorporate 3D modelling, punk video art, and algorithmic visuals on desktop machines. The video for Coldcut's 'Christmas Break' in 1989 is arguably one of the first pop promos produced entirely on microcomputers.
In 1988, Coldcut released Out To Lunch With Ahead Of Our Time, a double LP of Coldcut productions and re-cuts spanning the various aliases under which the duo had recorded. This continued the duo's tradition of releasing limited-availability vinyl.
The next Coldcut single, released in February 1988, moved towards a more house-influenced style. "Doctorin' the House", which debuted singer Yazz, became a top-ten hit, peaking at No. 6. In the same year, under the guise Yazz and the Plastic Population, they produced "The Only Way Is Up", a cover of a Northern soul song. The record reached No. 1 in the UK in August, and remained there for five weeks, becoming 1988's second-biggest-selling single. Producer Youth of Killing Joke also helped Coldcut with this record. The duo had another hit in September with "Stop This Crazy Thing", which featured reggae vocalist Junior Reid and reached No. 21 in the UK.
The single "People Hold On" became another UK Top 20 hit. Released in March 1989, it helped launch the career of the then relatively unknown singer Lisa Stansfield. Coldcut and Mark Saunders produced her debut solo single "This Is the Right Time", which became another UK Top 20 hit in August as well as reaching No. 21 on the U.S. Billboard Hot 100 the following year.
As the duo started to enjoy critical and commercial success, their debut album What's That Noise? was released in April 1989 on Ahead of Our Time and distributed by Big Life Records. The album gave "breaks the full length treatment", and showcased "their heady blend of hip-hop production aesthetics and proto-acid house grooves". It also rounded up a heap of unconventional guest features, quoted by SPIN as having "somehow found room at the same table for Queen Latifah and Mark E. Smith". The album's track "I'm in Deep" (featuring Smith) prefigured the indie-dance guitar-breaks crossover of such bands as the Stone Roses and Happy Mondays, utilizing Smith's freestyle raucous vocals over an acid house backing, and also including psych guitar samples from British rock band Deep Purple. What's That Noise? reached the Top 20 in the UK and was certified Silver.
Coldcut's second album, Some Like It Cold, released in 1990 on Ahead Of Our Time, featured a collaboration with Queen Latifah on the single "Find a Way". Though "Find a Way" was a minor hit in the UK, no more singles were released from the album. The duo was given the BPI "Producer of the Year Award" in 1990. Hex - alongside some other London visual experimenters such as iE - produced a series of videos for a longform VHS version of the album. This continued Coldcut and Hex's pioneering of the use of microcomputers to synthesize electronic music visuals.
After their success with Lisa Stansfield, Coldcut signed with her label, Arista. Conflicts arose with the major label, as Coldcut's "vision extended beyond the formulae of house and techno" and mainstream pop culture (The Virgin Encyclopedia of Nineties Music, 2000). The duo's album Philosophy eventually emerged in 1993. The singles "Dreamer" and "Autumn Leaves" (1994), sung by vocalist Janis Alexander, were both minor hits, but the album did not chart.
"Autumn Leaves" had strings recorded at Abbey Road, with a 30-piece string section and an arrangement by film composer Ed Shearmur. The leader of the string section was Simon Jeffes of Penguin Cafe Orchestra. Coldcut's insistence on their friend Mixmaster Morris to remix "Autumn Leaves" led to one of Morris' most celebrated remixes, which became a minor legend in ambient music. It has appeared on numerous compilations.
In 1990, whilst on their first tour in Japan (which also featured Norman Cook, who later became Fatboy Slim), Matt and Jon formed their second record label, Ninja Tune, as a self-titled "technocoloured escape pod", and a way to escape the creative control of major labels. The label enabled them to release music under different aliases (e.g. Bogus Order, DJ Food), which also helped them to avoid pigeonholing as producers. Ninja Tune's first release was Bogus Order's "Zen Brakes". The name Coldcut stayed with Arista so there were no official Coldcut releases for the next three years.
During this time, Coldcut still produced for artists on their new label, releasing a flood of material under different names and continuing to work with young groups. They additionally kept on with Solid Steel on Kiss FM and running the night club Stealth (Club of the Year in the NME, The Face, and Mixmag in 1996).
In 1991, Hex released their first video game, Top Banana; it was included in 1992 on a Hex release for the Commodore CDTV, arguably the first complete purpose-designed multimedia system. Top Banana was innovative in that it used sampled graphics, contained an ecological theme and a female lead character (dubbed "KT"), and its music changed through random processes. Coldcut and Hex presented this multimedia project as an example of the forthcoming convergence of pop music and computer-game characters.
In 1992, Hex's first single - "Global Chaos" / "Digital Love Opus 1" - combined rave visuals with techno and ambient interactive visuals. In November of that year, Hex released Global Chaos CDTV, which took advantage of the possibilities of the new CD-ROM medium. The Global Chaos CDTV disk (which contained the Top Banana game, interactive visuals and audio), was a forerunner of the "CD+" concept, uniting music, graphics, and video games into one. This multi-dimensional entertainment product received wide coverage in the national media, including features on Dance Energy, Kaleidoscope on BBC Radio 4, What's Up Doc? on ITV and Reportage on BBC Two. i-D Magazine was quoted as saying, "It's like your TV tripping".
Coldcut videos were made for most songs, often by Hexstatic, and used a lot of stock and sampled footage. Their "Timber" video, which created an AV collage piece using analogous techniques to audio sample collage, was put on heavy rotation on MTV. Stuart Warren Hill of Hexstatic referred to this technique as: "What you see is what you hear". "Timber" (which appears on both Let Us Play, Coldcut's fourth album, and Let Us Replay, their fifth) won awards for its innovative use of repetitive video clips synced to the music, including being shortlisted at the Edinburgh Television and Film Festival in their top five music videos of the year in 1998.
Coldcut began integrating video sampling into their live DJ gigs at the time, and incorporated multimedia content that caused press to credit the act as segueing "into the computer age". Throughout the 90s, Hex created visuals for Coldcut's live performances, and developed the CD-ROM portion of Coldcut's Let Us Play and Let Us Replay, in addition to software developed specifically for the album's world tour. Hex's inclusion of music videos and "playtools" (playful art/music software programs) on Coldcut's CD-ROM was completely ahead of the curve at that time, offering viewers/listeners a high level of interactivity. Playtools such as My Little Funkit and Playtime were the prototypes for Ninja Jamm, the app Coldcut designed and launched 16 years later. Playtime followed on from Coldcut and Hex's Synopticon installation, developing the auto-cutup algorithm and using other random processes to generate surprising combinations. Coldcut and Hex performed live using Playtime at the first Sónar Festival in 1994. Playtime was also used to generate the backing track for Coldcut's collaboration with Jello Biafra, "Every Home a Prison".
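To make the auto-cutup idea concrete, here is a minimal Python sketch of how random recombination of sample slices can work. It is illustrative only: the function name, the repeat_bias parameter and the stutter rule are assumptions made for this sketch, not details of Hex's actual Playtime code.

import random

def auto_cutup(slices, length=16, repeat_bias=0.3, seed=None):
    """Build a new sequence from sample slices.

    repeat_bias is the probability of repeating the previous slice,
    which produces the stutter typical of cut-up collage.
    """
    rng = random.Random(seed)
    sequence = []
    for _ in range(length):
        if sequence and rng.random() < repeat_bias:
            sequence.append(sequence[-1])        # stutter: repeat last slice
        else:
            sequence.append(rng.choice(slices))  # jump to any slice at random
    return sequence

if __name__ == "__main__":
    labelled_slices = ["kick", "snare_A", "vocal_stab", "horn_hit", "break_2"]
    print(auto_cutup(labelled_slices, length=8, seed=42))

The bias toward repeating the previous slice is what yields the "surprising combinations" described above: most steps jump unpredictably, while occasional repeats give the result a rhythmic anchor.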
In 1994, Coldcut and Hex contributed an installation to the Glasgow Gallery of Modern Art. The piece, called Generator, was installed in the Fire Gallery. Generator was an interactive installation which allowed users to mix sound, video, text and graphics and make their own audio-visual mix, modelled on the techniques and technology used by Coldcut in clubs and live performance events. It consisted of two consoles: the left controlling how the sounds are played, the right controlling how the images are played.
As part of the JAM exhibition of "Style, Music and Media" at the Barbican Art Gallery in 1996, Coldcut and Hex were commissioned to produce an interactive audiovisual piece called Synopticon. Conceived and designed by Robert Pepperell and Matt Black, the digital culture synthesiser allows users to "remix" sounds, images, text and music in a partially random, partially controlled way.
The year 1996 also brought the Coldcut name back to More and Black, and the pair celebrated with 70 Minutes of Madness, a mix CD that became part of the Journeys by DJ series. The release was credited with "bringing to wider attention the sort of freestyle mixing the pair were always known for through their radio show on KISS FM, Solid Steel, and their steady club dates". It was voted "Best Compilation of All Time" by Jockey Slut in 1998.
In February 1997, they released a double pack single "Atomic Moog 2000" / "Boot the System", the first Coldcut release on Ninja Tune. This was not eligible for the UK chart because time and format restrictions prevented the inclusion of the "Natural Rhythm" video on the CD. In August 1997, a reworking of the early track "More Beats + Pieces" gave them their first UK Top 40 hit since 1989.
The album Let Us Play! followed in September and also made the Top 40. The fourth album by Coldcut, Let Us Play! paid homage to the greats that inspired them. Their first album to be released on Ninja Tune, it featured guest appearances by Grandmaster Flash, Steinski, Jello Biafra, Jimpster, The Herbaliser, Talvin Singh, Daniel Pemberton and Selena Saliva. Coldcut's cut 'n' paste method on the album was compared to that of Dadaism and William Burroughs. Hex collaborated with Coldcut to produce the multimedia CD-ROM for the album. Hex later evolved the software into the engine that was used on the Let Us Play! world tour.
In 1997, Matt Black - alongside Cambridge-based developers Camart - created the real-time video manipulation software VJAMM. It allowed users to be a "digital video jockey", remixing and collaging sound and images and triggering audio and visual samples simultaneously, bringing futuristic technology to the audio-visual field. VJAMM rivalled some of the features of high-end, high-cost tech at the time. The VJAMM technology, praised as proof of how far computers had changed the face of live music, became seminal in both Coldcut's live sets (which Melody Maker called a "revelation") and their DJ sets. Their CCTV live show was featured at major festivals including Glastonbury, Roskilde, Sónar, the Montreux Jazz Festival, and John Peel's Meltdown. The "beautifully simple and devastatingly effective" software was deemed revolutionary, and became recognized as a major factor in the evolution of clubs. It eventually earned a place in the American Museum of the Moving Image's permanent collection. As quoted by The Independent, Coldcut's rallying cry was "Don't hate the media, be the media". NME was quoted as saying: "Veteran duo Coldcut are so cool they invented the remix - now they are doing the same for television."
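The coupling behind simultaneous audio and visual triggering can be sketched simply: each trigger pad binds an audio clip to a matching video clip, so firing the pad starts both together. The following Python sketch is hypothetical - the AVSample structure, function names and file names are invented for illustration and are not VJAMM's actual engine or API.

from dataclasses import dataclass

@dataclass
class AVSample:
    audio_clip: str   # e.g. a path to a sound file (hypothetical)
    video_clip: str   # the paired video fragment (hypothetical)

def trigger(pads, pad_number):
    """Fire one pad: a real engine would start both streams on the same clock tick."""
    sample = pads[pad_number]
    print(f"play audio: {sample.audio_clip}")
    print(f"show video: {sample.video_clip}")

pads = {
    1: AVSample("timber_chop.wav", "timber_chop.mov"),
    2: AVSample("drum_hit.wav", "drum_hit.mov"),
}
trigger(pads, 1)

Because the two clips are bound in one unit, cutting up the audio necessarily cuts up the picture in the same way - the "what you see is what you hear" principle Hexstatic described.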
Also working with Camart, Black designed the DJamm software in 1998, which Coldcut used on laptops for their live shows, providing the audio bed alongside VJAMM's audiovisual samples. Matt Black explained they designed DJamm so they "could perform electronic music in a different way – i.e., not just taking a session band out to reproduce what you put together in the studio using samples. It had a relationship to DJing, but was more interactive and more effective." DJamm was pioneering at the time in its ability to shuffle sliced loops into intricate sequences, enabling users to split loops into any number of parts.
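The slice-and-resequence idea can be sketched in a few lines of Python. This is a simplified, hypothetical illustration, not DJamm's real-time audio engine: the function names are invented, and a "loop" here is just a list standing in for a buffer of audio samples.

def slice_loop(loop, n_parts):
    """Split a loop into n_parts contiguous, equal-length slices."""
    if len(loop) % n_parts:
        raise ValueError("loop length must divide evenly into n_parts")
    size = len(loop) // n_parts
    return [loop[i * size:(i + 1) * size] for i in range(n_parts)]

def resequence(loop, n_parts, pattern):
    """Rebuild the loop by playing its slices in the order given by pattern."""
    parts = slice_loop(loop, n_parts)
    out = []
    for index in pattern:
        out.extend(parts[index % n_parts])
    return out

# Example: an 8-step loop split into 4 slices, replayed in the order 0-2-1-1-3-0.
loop = list(range(8))
print(resequence(loop, 4, [0, 2, 1, 1, 3, 0]))

Splitting into "any number of parts" then becomes a matter of changing n_parts and supplying a longer or shorter pattern, which is what makes intricate resequencing cheap to perform live.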
In 1999, Coldcut released Let Us Replay!, a double-disc remix album on which Coldcut's classic tunes were remixed by the likes of Cornelius (whose remix was heralded as a highlight of the album), Irresistible Force, Shut Up And Dance, Carl Craig and J Swinscoe. Let Us Replay! pieces together "short sharp shocks that put the mental in 'experimental' and still bring the breaks till the breakadawn". It also includes a few live tracks from the duo's innovative world tour. The CD-ROM of the album, which also contained a free demo of the VJamm software, was one of the earliest audiovisual CD-ROMs on the market; Muzik claimed it deserved to "have them canonized...it's like buying an entire mini studio for under $15".
In 2000, the Solid Steel show moved to BBC London.
Coldcut continued to forge interesting collaborations, including 2001's Re:volution EP, for which Coldcut created their own political party (The Guilty Party). Featuring scratches and samples of Tony Blair and William Hague speeches, the 3-track EP included Nautilus' "Space Journey", which won an Intermusic contest in 2000. The video was widely played on MTV. With "Space Journey", Coldcut were arguably the first group to give fans access to the multitrack parts, or "stems", of their songs, building on the idea of interactivity and sharing from Let Us Play.
In 2001, Coldcut produced tracks for the Sega music video game Rez. Rez replaced typical video-game sound effects with electronic music; the player created sounds and melodies, intended to simulate a form of synesthesia. The soundtrack also featured Adam Freeland and Oval.
In 2002, while utilizing VJamm and Detraktor, Coldcut and Juxta remixed Herbie Hancock's classic "Rockit", creating both an audio and video remix.
Working with Marcus Clements in 2002, Coldcut released the sample manipulation algorithm from their DJamm software as a standalone VST plugin that could be used in other software, naming it the "Coldcutter".
Also in 2002, Coldcut, with UK VJs Headspace (now mainly performing as the VJamm Allstars), developed Gridio, an interactive, immersive audio-visual installation for the Pompidou Centre as part of the Sonic Process exhibition. The Sonic Process exhibition was launched at the MACBA in Barcelona in conjunction with Sónar, featuring Gridio as its centerpiece. In 2003, a commission for Graz led to a specially built version of Gridio, in a cave inside the castle mountain in Austria. Gridio was later commissioned by O2 for two simultaneous customised installations at the O2 Wireless Festivals in Leeds and London in 2007. That same year, Gridio was featured as part of Optronica at the opening week of the new BFI Southbank development in London.
In 2003, Black worked with Penny Rimbaud (ex Crass) on Crass Agenda's Savage Utopia project. Black performed the piece with Rimbaud, Eve Libertine and other players at London's Vortex Jazz Club.
In 2004, Coldcut collaborated with American video mashup artist TV Sheriff to produce their cut-up entitled "Revolution USA". The tactical-media project (coordinated with Canadian art duo NomIg) followed on from the UK version and extended the premise "into an open access participatory project". Through the multimedia political art project, over 12 gigabytes of footage from the last 40 years of US politics were made accessible to download, allowing participants to create a cut-up over a Coldcut beat. Coldcut also collaborated with TV Sheriff and NomIg to produce two audiovisual pieces "World of Evil" (2004) and "Revolution '08" (2008), both composed of footage from the United States presidential elections of respective years. The music used was composed by Coldcut, with "Revolution '08" featuring a remix by the Qemists.
Later that year, a collaboration with the British Antarctic Survey (BAS) led to the psychedelic art documentary Wavejammer. Coldcut was given access to the BAS archive in order to create sounds and visuals for the short film.
2004 also saw Coldcut produce a radio play (incidentally called Sound Mirrors) in conjunction with author Hari Kunzru for BBC Radio 3.
Coldcut returned with the single "Everything Is Under Control" at the end of 2005, featuring Jon Spencer (of Jon Spencer Blues Explosion) and Mike Ladd. It was followed in 2006 by their fifth studio album Sound Mirrors, which was quoted as being "one of the most vital and imaginative records Jon Moore and Matt Black have ever made", and saw the duo "continue, impressively, to find new ways to present political statements through a gamut of pristine electronics and breakbeats" (Future Music, 2007). The fascinating array of guest vocalists included Soweto Kinch, Annette Peacock, Amiri Baraka, and Saul Williams. The latter followed on from Coldcut's remix of Williams' "The Pledge" for a project with DJ Spooky.
A 100-date audiovisual world tour commenced for Sound Mirrors, which was considered "no small feat in terms of technology or human effort". Coldcut was accompanied by scratch DJ Raj and AV artist Juxta, in addition to guest vocalists from the album, including UK rapper Juice Aleem, Roots Manuva, Mpho Skeef, Jon Spencer and house legend Robert Owens.
Three further singles were released from the album including the Top 75 hit "True Skool" with Roots Manuva. The same track appeared on the soundtrack of the video game FIFA Street 2.
Sponsored by the British Council, in 2005 Coldcut introduced AV mixing to India with the Union project, alongside collaborators Howie B and Aki Nawaz of Fun-Da-Mental. Coldcut created an A/V remix of the Bollywood hit movie Kal Ho Naa Ho.
In 2006, Coldcut performed an A/V set based on "Music for 18 Musicians" as part of Steve Reich's 70th birthday gig at the Barbican Centre in London. This was originally written for the 1999 album Reich Remixed.
Coldcut remixed another classic song in 2007: Nina Simone's "Save Me". This was part of a remix album called Nina Simone: Remixed & Re-imagined, featuring remixes from Tony Humphries, Francois K and Chris Coco.
In early 2007, Coldcut and Mixmaster Morris created a psychedelic AV obituary/tribute to Robert Anton Wilson, the 60s author of the Illuminatus! Trilogy, which was staged at the Queen Elizabeth Hall, London, on 18 March 2007. The tribute featured graphic novel writer Alan Moore and artist Bill Drummond and a performance by experimental theatre legend Ken Campbell. Coldcut and Morris' hour-and-a-half performance resembled a documentary being remixed on the fly, cutting up nearly 15 hours' worth of Wilson's lectures.
In 2008, an international group of party organisers, activists and artists including Coldcut received a grant from the Intelligent Energy Department of the European Union, to create a project that promoted intelligent energy and environmental awareness to the youth of Europe. The result was Energy Union, a piece of VJ cinema, political campaign, music tour, party, art exhibition and social media hub. Energy Union toured 12 EU countries throughout 2009 and 2010, completing 24 events in total. Coldcut created the Energy Union show for the tour, a one-hour Audio/Visual montage on the theme of Intelligent Energy. In presenting new ideas for climate, environmental and energy communication strategies, the Energy Union tour was well received, and reached a widespread audience in cities across the UK, Germany, Belgium, The Netherlands, Croatia, Slovenia, Austria, Hungary, Bulgaria, Spain and the Czech Republic.
Also in 2008, Coldcut was asked to remix the theme song for British cult TV show Doctor Who for the program's 40th anniversary. In October 2008, Coldcut celebrated the legacy of the BBC Radiophonic Workshop (the place where the Doctor Who theme was created) with a live DJ mix at London's legendary Roundhouse. The live mix incorporated classic Radiophonic Workshop compositions with extended sampling of the original gear.
Additionally in 2008, Coldcut remixed "Ourselves", a Japanese No. 1 hit from the single "&" by Ayumi Hamasaki. This mix was included on the album Ayu-mi-x 6: Gold.
Starting in 2009, Matt Black, with musician/artist/coder Paul Miller (creator of the TX Modular open-source synth), developed Granul8, a new type of visual effects source that Black termed a "granular video synthesiser". Granul8 allows the use of real-time VJ techniques, including video feedback, combined with the VDMX VJ software.
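The "granular" idea is borrowed from audio synthesis: instead of playing material linearly, short overlapping "grains" are emitted from around a drifting playhead. The Python sketch below illustrates the general technique on a simple list of frames; it is an assumption-laden illustration, not the actual Granul8 implementation.

import random

def granular_sequence(frames, grain_len=4, jitter=6, steps=8, seed=None):
    """Emit short grains of consecutive frames from around a moving playhead."""
    rng = random.Random(seed)
    out = []
    playhead = 0
    for _ in range(steps):
        # Jitter the grain's start position around the playhead, clamped in range.
        start = max(0, min(len(frames) - grain_len,
                           playhead + rng.randint(-jitter, jitter)))
        out.extend(frames[start:start + grain_len])  # one grain of frames
        playhead = (playhead + grain_len) % len(frames)
    return out

frames = [f"frame{i:03d}" for i in range(32)]
print(granular_sequence(frames, seed=1)[:12])

Small jitter values keep the output close to normal playback, while large ones smear the source into abstract texture - the same trade-off granular audio synthesizers expose.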
From 2009 onwards, Black has been collaborating with coder and psychedelic mathematician William Rood to create a forthcoming project called Liveloom, a social media AV mixer.
In 2010, Coldcut celebrated 20 years of releasing music with its label, Ninja Tune. A book entitled Ninja Tune: 20 Years of Beats and Pieces was released on 12 August 2010, and an exhibition was held at Black Dog Publishing's Black Dog Space in London, showcasing artwork, design and photography from the label's 20-year history. A compilation album was released on 20 September in two formats: a regular version consisting of two 2-disc volumes, and a limited edition which contained six CDs, six 7" vinyl singles, a hardback copy of the book, a poster and additional items. Ninja Tune also incorporated a series of international parties. This repositioned Ninja as a continually compelling and influential label, being one of the "longest-running (and successful) UK indie labels to come out of the late-1980s/early-90s explosion in dance music and hip-hop" (Pitchfork, 28 September 2010). Pitchfork claimed it had a "right to show off a little".
In July 2013, Coldcut produced a piece entitled "D'autre" based on the writings of French poet Arthur Rimbaud, for Forum Des Images in Paris. The following month, in August, Coldcut produced a new soundtrack for a section of André Sauvage's classic film Études sur Paris, which was shown as part of Noise of Art at the BFI in London, which celebrated 100 years of electronic music and silent cinema. Coldcut put new music to films from the Russolo era, incorporating original recordings of Russolo's proto-synths.
In 2014, Coldcut did three soundtracks as part of the project New City, a series of animated skylines of the near future developed by Tomorrow's Thought Today's Liam Young, with accompanying writing from sci-fi authors Jeff Noon, Pat Cadigan and Tim Maughan.
In 2013, Coldcut released Ninja Jamm, a music-making app for Android and iOS, in collaboration with London-based arts and technology firm Seeper. Geared toward both casual listeners and more experienced DJs and music producers, the freemium app allows users to download, remix and make music with samplepacks and tunepacks that feature pro-quality sample libraries as well as original tracks and mixes by Coldcut and other Ninja artists. With the "intuitive yet deep" app, users can turn instruments on and off, swap between clips, add glitches and effects, trigger and pitch-bend stabs and one-off samples, and change the tempo of the track instantly. Users can additionally record as they mix and instantly upload to SoundCloud or save the mixes locally. Tunepack releases for Ninja Jamm are increasingly synchronised with Ninja Tune releases on conventional formats. To date, over 30 tunepacks have been released, including ones from Amon Tobin, Bonobo, Coldcut, DJ Food, Martyn, Lapalux, Machinedrum, Raffertie, Irresistible Force, FaltyDL, Shuttle and Starkey. Ninja Jamm was featured by Apple in the New and Noteworthy section of the App Store in the week of release and received over 100,000 downloads in its first week. Coldcut are developing Ninja Jamm further after the Android release garnered acclaim from the Guardian, the Independent, Gizmodo and many more reviewers.
In 2017, Ahead Of Our Time released the album Stories From Far Away On Piano by James Heather, and also released its follow up in 2022, the album Invisible Forces.
On 6 December 2017, BBC Radio 4 broadcast a play, Billie Homeless Dies at the End by Tom Kelly with electronic music by Coldcut.
In 2020, Coldcut appeared on the global music/afrobeat album Keleketla! (with artists such as Tenderlonious, Tamar Osborn, Sibusile Xaba, Thabang Tabane and Tony Allen), which was released on their Ahead of Our Time Records label.
On 19 November 2021, Ahead of Our Time released an ambient compilation curated from old and new compositions, with extra sequencing help from Mixmaster Morris. The compilation featured music by Ryuichi Sakamoto, Julianna Barwick, Daniel Pemberton, Kaitlyn Aurelia Smith, Sigur Rós, Laraaji and many more artists, purposefully ranging in prominence.
| 2001-10-01T21:55:32Z | 2023-12-15T23:04:26Z | https://en.wikipedia.org/wiki/Coldcut
6,656 | Cuisine | A cuisine is a style of cooking characterized by distinctive ingredients, techniques and dishes, and usually associated with a specific culture or geographic region. Regional food preparation techniques, customs, and ingredients combine to enable dishes unique to a region.
Used in English since the late 18th century, the word cuisine – meaning manner or style of cooking – is borrowed from the French for "style of cooking", as originally derived from Latin coquere "to cook".
A cuisine is partly determined by ingredients that are available locally or through trade. Regional ingredients are developed and commonly contribute to a regional or national cuisine, such as Japanese rice in Japanese cuisine.
Religious food laws can also exercise an influence on cuisine: Indian cuisine, for example, is shaped by Hinduism and is mainly lacto-vegetarian (avoiding meat and eggs), in part because of the reverence accorded to sacred animals. Sikhism in Punjabi cuisine, Buddhism in East Asian cuisine, Christianity in European cuisine, Islam in Middle Eastern cuisine, and Judaism in Jewish and Israeli cuisine all exercise an influence on cuisine.
Some factors that have an influence on a region's cuisine include the area's climate, the trade among different countries, religious or sumptuary laws and culinary culture exchange. For example, a tropical diet may be based more on fruits and vegetables, while a polar diet might rely more on meat and fish.
The area's climate, in large measure, determines the native foods that are available. In addition, climate influences food preservation. For example, foods preserved for winter consumption by smoking, curing, and pickling have remained significant in world cuisines for their altered gustatory properties.
The trade among different countries also largely affects a region's cuisine. Dating back to the ancient spice trade, seasonings such as cinnamon, cassia, cardamom, ginger, and turmeric were important items of commerce in the earliest evolution of trade, and India was a global market for this. Cinnamon and cassia found their way to the Middle East at least 4,000 years ago.
Certain foods and food preparations are required or proscribed by religious or sumptuary laws, such as Islamic dietary laws and Jewish dietary laws.
Culinary culture exchange is also an important factor for cuisine in many regions: Japan's first substantial and direct exposure to the West came with the arrival of European missionaries in the second half of the 16th century. At that time, the combination of Spanish and Portuguese game frying techniques with an East Asian method for cooking vegetables in oil, led to the development of tempura, the "popular Japanese dish in which seafood and many different types of vegetables are coated with batter and deep fried".
Cuisine dates back to Antiquity. As food began to require more planning, meals emerged that were organized around culture.
Cuisines evolve continually, and new cuisines are created by innovation and cultural interaction. One recent example is fusion cuisine, which combines elements of various culinary traditions while not being categorized per any one cuisine style, and generally refers to the innovations in many contemporary restaurant cuisines since the 1970s. Nouvelle cuisine (New cuisine) is an approach to cooking and food presentation in French cuisine that was popularized in the 1960s by the food critics Henri Gault, who invented the phrase, and his colleagues André Gayot and Christian Millau in a new restaurant guide, the Gault-Millau, or Le Nouveau Guide. Molecular cuisine is a modern style of cooking which takes advantage of many technical innovations from the scientific disciplines (molecular cooking). The term was coined in 1999 by the French INRA chemist Hervé This because he wanted to distinguish it from the name molecular gastronomy (a scientific activity), which was introduced by him and the late Oxford physicist Nicholas Kurti in 1988. It is also known as multi-sensory cooking, modernist cuisine, culinary physics, and experimental cuisine by some chefs. In addition, international trade brings new foodstuffs and ingredients to existing cuisines and leads to changes. The introduction of hot pepper to China from South America around the end of the 17th century greatly influenced Sichuan cuisine, which combines the original taste (with the use of Sichuan pepper) with the taste of the newly introduced hot pepper, creating a unique mala (麻辣) flavor that is mouth-numbingly spicy and pungent.
A global cuisine is a cuisine that is practiced around the world, and can be categorized according to the common use of major foodstuffs, including grains, produce and cooking fats.
Regional cuisines can vary based on availability and usage of specific ingredients, local cooking traditions and practices, as well as overall cultural differences. Such factors can be more-or-less uniform across wide swaths of territory, or vary intensely within individual regions. For example, in Central America and northern South America, corn (maize), both fresh and dried, is a staple food, and is used in many different ways. In northern Europe, wheat, rye, and fats of animal origin predominate, while in southern Europe olive oil is ubiquitous and rice is more prevalent. In Italy, the cuisine of the north, featuring butter and rice, stands in contrast to that of the south, with its wheat pasta and olive oil. In some parts of China, rice is the staple, while in others this role is filled by noodles and bread. Throughout the Middle East and Mediterranean, common ingredients include lamb, olive oil, lemons, peppers, and rice. The vegetarianism practiced in much of India has made pulses (crops harvested solely for the dry seed) such as chickpeas and lentils as important as wheat or rice. From India to Indonesia, the extensive use of spices is characteristic; coconuts and seafood are also used throughout the region both as foodstuffs and as seasonings.
African cuisines use a combination of locally available fruits, cereals and vegetables, as well as milk and meat products. In some parts of the continent, the traditional diet features a preponderance of milk, curd and whey products. In much of tropical Africa, however, cow's milk is rare and cannot be produced locally (owing to various diseases that affect livestock). The continent's diverse demographic makeup is reflected in the many different eating and drinking habits, dishes, and preparation techniques of its manifold populations.
Due to Asia's vast size and extremely diverse geography and demographics, Asian cuisines are many and varied, and include East Asian cuisine, South Asian cuisine, Southeast Asian cuisine, Central Asian cuisine and West Asian cuisine. Ingredients common to East Asia and Southeast Asia (due to overseas Chinese influence) include rice, ginger, garlic, sesame seeds, chilies, dried onions, soy, and tofu, with stir frying, steaming, and deep frying being common cooking methods. While rice is common to most regional cuisines in Asia, different varieties are popular in the different regions: Basmati rice is popular in South Asia, Jasmine rice in Southeast Asia, long-grain rice in China, and short-grain rice in Japan and Korea. Curry is also a common ingredient found in South Asia, Southeast Asia, and East Asia (notably Japanese curry); however, it is not popular in West Asian and Central Asian cuisines. Curry dishes with origins in South Asia usually have a yogurt base, those with origins in Southeast Asia a coconut milk base, and those in East Asia a stewed meat and vegetable base. South Asian cuisine and Southeast Asian cuisine are often characterized by their extensive use of spices and herbs native to the tropical regions of Asia.
European cuisine (alternatively, "Western cuisine") includes the cuisines of Europe and other Western countries. European cuisine includes non-indigenous cuisines of North America, Australasia, Oceania, and Latin America as well. The term is used by East Asians to contrast with East Asian styles of cooking. When used in English, the term may refer more specifically to cuisine in (Continental) Europe; in this context, a synonym is Continental cuisine.
Oceanian cuisines include Australian cuisine, New Zealand cuisine, and cuisines from many other islands or island groups throughout Oceania. Australian cuisine consists of immigrant Anglo-Celtic-derived cuisine, Bushfood prepared and eaten by native Aboriginal Australian peoples, and various newer Asian influences. New Zealand cuisine also consists of European-inspired dishes, such as Pavlova, and native Māori cuisine. Across Oceania, the kūmara (sweet potato) and taro have long been staples, from Papua New Guinea to the South Pacific. On most islands in the South Pacific, fish are widely consumed because of the proximity to the ocean.
The cuisines of the Americas are found across North and South America, and are based on the cuisines of the countries from which the immigrant people came, primarily Europe. However, the traditional European cuisine has been adapted by the addition of many local and native ingredients, and many of their techniques have been added to traditional foods as well. Native American cuisine is prepared by indigenous populations across the continent, and its influences can be seen on multi-ethnic Latin American cuisine. Many staple foods eaten across the continent, such as corn (maize), beans, and potatoes, have their own respective native origins. The regional cuisines are North American cuisine, Mexican cuisine, Central American cuisine, South American cuisine, and Caribbean cuisine.
{
"paragraph_id": 0,
"text": "A cuisine is a style of cooking characterized by distinctive ingredients, techniques and dishes, and usually associated with a specific culture or geographic region. Regional food preparation techniques, customs, and ingredients combine to enable dishes unique to a region.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Used in English since the late 18th century, the word cuisine – meaning manner or style of cooking – is borrowed from the French for \"style of cooking\", as originally derived from Latin coquere \"to cook\".",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "A cuisine is partly determined by ingredients that are available locally or through trade. Regional ingredients are developed and commonly contribute to a regional or national cuisine, such as Japanese rice in Japanese cuisine.",
"title": "Influences on cuisine"
},
{
"paragraph_id": 3,
"text": "Religious food laws can also exercise an influence on cuisine, such as Indian cuisine and Hinduism that is mainly lacto-vegetarian (avoiding meat and eggs) due to sacred animal worship. Sikhism in Punjabi cuisine, Buddhism in East Asian cuisine, Christianity in European cuisine, Islam in Middle Eastern cuisine, and Judaism in Jewish and Israeli cuisine all exercise an influence on cuisine.",
"title": "Influences on cuisine"
},
{
"paragraph_id": 4,
"text": "Some factors that have an influence on a region's cuisine include the area's climate, the trade among different countries, religious or sumptuary laws and culinary culture exchange. For example, a tropical diet may be based more on fruits and vegetables, while a polar diet might rely more on meat and fish.",
"title": "Influences on cuisine"
},
{
"paragraph_id": 5,
"text": "The area's climate, in large measure, determines the native foods that are available. In addition, climate influences food preservation. For example, foods preserved for winter consumption by smoking, curing, and pickling have remained significant in world cuisines for their altered gustatory properties.",
"title": "Influences on cuisine"
},
{
"paragraph_id": 6,
"text": "The trade among different countries also largely affects a region's cuisine. Dating back to the ancient spice trade, seasonings such as cinnamon, cassia, cardamom, ginger, and turmeric were important items of commerce in the earliest evolution of trade, and India was a global market for this. Cinnamon and cassia found their way to the Middle East at least 4,000 years ago.",
"title": "Influences on cuisine"
},
{
"paragraph_id": 7,
"text": "Certain foods and food preparations are required or proscribed by the religiousness or sumptuary laws, such as Islamic dietary laws and Jewish dietary laws.",
"title": "Influences on cuisine"
},
{
"paragraph_id": 8,
"text": "Culinary culture exchange is also an important factor for cuisine in many regions: Japan's first substantial and direct exposure to the West came with the arrival of European missionaries in the second half of the 16th century. At that time, the combination of Spanish and Portuguese game frying techniques with an East Asian method for cooking vegetables in oil, led to the development of tempura, the \"popular Japanese dish in which seafood and many different types of vegetables are coated with batter and deep fried\".",
"title": "Influences on cuisine"
},
{
"paragraph_id": 9,
"text": "Cuisine dates back to Antiquity. As food began to require more planning, there was an emergence of meals that situated around culture.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Cuisines evolve continually, and new cuisines are created by innovation and cultural interaction. One recent example is fusion cuisine, which combines elements of various culinary traditions while not being categorized per any one cuisine style, and generally refers to the innovations in many contemporary restaurant cuisines since the 1970s. Nouvelle cuisine (New cuisine) is an approach to cooking and food presentation in French cuisine that was popularized in the 1960s by the food critics Henri Gault, who invented the phrase, and his colleagues André Gayot and Christian Millau in a new restaurant guide, the Gault-Millau, or Le Nouveau Guide. Molecular cuisine, is a modern style of cooking which takes advantage of many technical innovations from the scientific disciplines (molecular cooking). The term was coined in 1999 by the French INRA chemist Hervé This because he wanted to distinguish it from the name Molecular gastronomy (a scientific activity) that was introduced by him and the late Oxford physicist Nicholas Kurti in 1988. It is also named as multi sensory cooking, modernist cuisine, culinary physics, and experimental cuisine by some chefs. Besides, international trade brings new foodstuffs including ingredients to existing cuisines and leads to changes. The introduction of hot pepper to China from South America around the end of the 17th century, greatly influencing Sichuan cuisine, which combines the original taste (with use of Sichuan pepper) with the taste of newly introduced hot pepper and creates a unique mala (麻辣) flavor that's mouth-numbingly spicy and pungent.",
"title": "Evolution of cuisine"
},
{
"paragraph_id": 11,
"text": "A global cuisine is a cuisine that is practiced around the world, and can be categorized according to the common use of major foodstuffs, including grains, produce and cooking fats.",
"title": "Global cuisine"
},
{
"paragraph_id": 12,
"text": "Regional cuisines can vary based on availability and usage of specific ingredients, local cooking traditions and practices, as well as overall cultural differences. Such factors can be more-or-less uniform across wide swaths of territory, or vary intensely within individual regions. For example, in Central and North South America, corn (maize), both fresh and dried, is a staple food, and is used in many different ways. In northern Europe, wheat, rye, and fats of animal origin predominate, while in southern Europe olive oil is ubiquitous and rice is more prevalent. In Italy, the cuisine of the north, featuring butter and rice, stands in contrast to that of the south, with its wheat pasta and olive oil. In some parts of China, rice is the staple, while in others this role is filled by noodles and bread. Throughout the Middle East and Mediterranean, common ingredients include lamb, olive oil, lemons, peppers, and rice. The vegetarianism practiced in much of India has made pulses (crops harvested solely for the dry seed) such as chickpeas and lentils as important as wheat or rice. From India to Indonesia, the extensive use of spices is characteristic; coconuts and seafood are also used throughout the region both as foodstuffs and as seasonings.",
"title": "Regional diversity"
},
{
"paragraph_id": 13,
"text": "African cuisines use a combination of locally available fruits, cereals and vegetables, as well as milk and meat products. In some parts of the continent, the traditional diet features a preponderance of milk, curd and whey products. In much of tropical Africa, however, cow's milk is rare and cannot be produced locally (owing to various diseases that affect livestock). The continent's diverse demographic makeup is reflected in the many different eating and drinking habits, dishes, and preparation techniques of its manifold populations.",
"title": "Regional diversity"
},
{
"paragraph_id": 14,
"text": "Due to Asia's vast size and extremely diverse geography and demographics, Asian cuisines are many and varied, and include East Asian cuisine, South Asian cuisine, Southeast Asian cuisine, Central Asian cuisine and West Asian cuisine. Ingredients common to East Asia and Southeast Asia (due to overseas Chinese influence) include rice, ginger, garlic, sesame seeds, chilies, dried onions, soy, and tofu, with stir frying, steaming, and deep frying being common cooking methods. While rice is common to most regional cuisines in Asia, different varieties are popular in the different regions: Basmati rice is popular in South Asia, Jasmine rice in Southeast Asia, and long-grain rice in China and short-grain rice in Japan and Korea. Curry is also a common ingredient found in South Asia, Southeast Asia, and East Asia (notably Japanese curry); however, they are not popular in West Asian and Central Asian cuisines. Those curry dishes with origins in South Asia usually have a yogurt base, with origins in Southeast Asia a coconut milk base, and in East Asia a stewed meat and vegetable base. South Asian cuisine and Southeast Asian cuisine are often characterized by their extensive use of spices and herbs native to the tropical regions of Asia.",
"title": "Regional diversity"
},
{
"paragraph_id": 15,
"text": "European cuisine (alternatively, \"Western cuisine\") include the cuisines of Europe and other Western countries. European cuisine includes non-indigenous cuisines of North America, Australasia, Oceania, and Latin America as well. The term is used by East Asians to contrast with East Asian styles of cooking. When used in English, the term may refer more specifically to cuisine in (Continental) Europe; in this context, a synonym is Continental cuisine.",
"title": "Regional diversity"
},
{
"paragraph_id": 16,
"text": "Oceanian cuisines include Australian cuisine, New Zealand cuisine, and cuisines from many other islands or island groups throughout Oceania. Australian cuisine consists of immigrant Anglo-Celtic derived cuisine, and Bushfood prepared and eaten by native Aboriginal Australian peoples, and various newer Asian influences. New Zealand cuisine also consists of European inspired dishes, such as Pavlova, and native Māori cuisine. Across Oceania, staples include the Kūmura and Taro, which was/is a staple from Papua New Guinea to the South Pacific. On most islands in the south pacific, fish are widely consumed because of the proximity to the ocean.",
"title": "Regional diversity"
},
{
"paragraph_id": 17,
"text": "The cuisines of the Americas are found across North and South America, and are based on the cuisines of the countries from which the immigrant people came from, primarily Europe. However, the traditional European cuisine has been adapted by the addition of many local and native ingredients, and many of their techniques have been added to traditional foods as well. Native American cuisine is prepared by indigenous populations across the continent, and its influences can be seen on multi-ethnic Latin American cuisine. Many staple foods have been seen to be eaten across the continent, such as corn (maize), beans, and potatoes have their own respective native origins. The regional cuisines are North American cuisine, Mexican cuisine, Central American cuisine, South American cuisine, and Caribbean cuisine.",
"title": "Regional diversity"
}
] | A cuisine is a style of cooking characterized by distinctive ingredients, techniques and dishes, and usually associated with a specific culture or geographic region. Regional food preparation techniques, customs, and ingredients combine to enable dishes unique to a region. | 2001-10-26T03:24:44Z | 2023-12-29T10:42:48Z | [
"Template:Short description",
"Template:Morefootnotes",
"Template:Reflist",
"Template:Cite news",
"Template:Sister project links",
"Template:Authority control",
"Template:Main",
"Template:Cuisine portal links",
"Template:Div col",
"Template:Div col end",
"Template:Cite book",
"Template:Cite journal",
"Template:Wikivoyage",
"Template:Cuisine",
"Template:For",
"Template:Cite web",
"Template:Use dmy dates",
"Template:Ndash",
"Template:Further",
"Template:Lang",
"Template:ISBN",
"Template:Meals navbox"
] | https://en.wikipedia.org/wiki/Cuisine |
6,660 | Codec | A codec is a device or computer program that encodes or decodes a data stream or signal. Codec is a portmanteau of coder/decoder.
In electronic communications, an endec is a device that acts as both an encoder and a decoder on a signal or data stream, and hence is a type of codec. Endec is a portmanteau of encoder/decoder.
A coder or encoder encodes a data stream or a signal for transmission or storage, possibly in encrypted form, and the decoder function reverses the encoding for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications.
In the mid-20th century, a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including companding functions.
An audio codec converts analog audio signals into digital signals for transmission or encodes them for storage. A receiving device converts the digital signals back to analog form using an audio decoder for playback. An example of this is the codecs used in the sound cards of personal computers. A video codec accomplishes the same task for video signals.
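To make that encode/decode round trip concrete, here is a minimal Python sketch of the PCM idea described above: sampled analog values are quantized to 16-bit integers (the encoder) and scaled back to floating-point values for playback (the decoder). The sample rate, bit depth, and test tone are illustrative assumptions, not a description of any particular sound card's codec.

```python
import numpy as np

SAMPLE_RATE = 8000   # samples per second (illustrative choice)
BIT_DEPTH = 16       # bits per sample, as on an audio CD

def pcm_encode(analog: np.ndarray) -> np.ndarray:
    """Encoder: quantize a float signal in [-1.0, 1.0] to signed 16-bit integers."""
    peak = 2 ** (BIT_DEPTH - 1) - 1                     # 32767
    return np.round(np.clip(analog, -1.0, 1.0) * peak).astype(np.int16)

def pcm_decode(samples: np.ndarray) -> np.ndarray:
    """Decoder: map the stored integers back to floats for playback."""
    peak = 2 ** (BIT_DEPTH - 1) - 1
    return samples.astype(np.float64) / peak

# One second of a 440 Hz tone standing in for the sampled analog input.
t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
analog = np.sin(2 * np.pi * 440.0 * t)

restored = pcm_decode(pcm_encode(analog))
print("max round-trip error:", np.max(np.abs(analog - restored)))  # about 1.5e-5
```

The residual error is pure quantization noise, bounded by half a least-significant bit; a higher bit depth shrinks it further.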
An Emergency Alert System unit is usually an endec, but sometimes just a decoder.
When implementing the Infrared Data Association (IrDA) protocol, an endec may be used between the UART and the optoelectronic systems.
In addition to encoding a signal, a codec may also compress the data to reduce transmission bandwidth or storage space. Compression codecs are classified primarily into lossy codecs and lossless codecs.
Lossless codecs are often used for archiving data in a compressed form while retaining all information present in the original stream. If preserving the original quality of the stream is more important than eliminating the correspondingly larger data sizes, lossless codecs are preferred. This is especially true if the data is to undergo further processing (for example editing) in which case the repeated application of processing (encoding and decoding) on lossy codecs will degrade the quality of the resulting data such that it is no longer identifiable (visually, audibly or both). Using more than one codec or encoding scheme successively can also degrade quality significantly. The decreasing cost of storage capacity and network bandwidth has a tendency to reduce the need for lossy codecs for some media.
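As a minimal illustration of the lossless case, the following Python sketch uses the standard library's zlib codec; the point is the assertion that the decoded bytes are bit-identical to the original, which is the property that makes lossless codecs safe for archiving and repeated editing. The sample data is invented for the example.

```python
import zlib

# Invented sample data standing in for an archival master.
original = b"An archival master must survive compression unchanged. " * 1000

compressed = zlib.compress(original, 9)   # encode (level 9 = best compression)
restored = zlib.decompress(compressed)    # decode

assert restored == original               # lossless: bit-identical round trip
print(f"{len(original)} bytes -> {len(compressed)} bytes "
      f"({len(compressed) / len(original):.1%} of the original size)")
```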
Many popular codecs are lossy. They reduce quality in order to maximize compression. Often, this type of compression is virtually indistinguishable from the original uncompressed sound or images, depending on the codec and the settings used. The most widely used lossy data compression technique in digital media is based on the discrete cosine transform (DCT), used in compression standards such as JPEG images, H.26x and MPEG video, and MP3 and AAC audio. Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted, e.g. over the internet.
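A hedged sketch of the DCT principle behind those standards, in Python with NumPy: transform a small block of samples into frequency coefficients, quantize them (the lossy step, which sends small high-frequency coefficients to zero), and reconstruct. The block size, test signal, and quantization step are illustrative choices, not parameters taken from JPEG or MP3.

```python
import numpy as np

def dct_basis(n: int) -> np.ndarray:
    """Orthonormal DCT-II basis matrix; row k is the k-th cosine basis vector."""
    k = np.arange(n)[:, None]      # frequency index
    i = np.arange(n)[None, :]      # sample index
    basis = np.cos(np.pi / n * (i + 0.5) * k)
    basis[0] *= np.sqrt(1.0 / n)
    basis[1:] *= np.sqrt(2.0 / n)
    return basis

n = 8                                       # illustrative block size
B = dct_basis(n)
rng = np.random.default_rng(0)
block = np.sin(np.linspace(0.0, 1.5, n)) + 0.05 * rng.standard_normal(n)

coeffs = B @ block                          # analysis: samples -> frequency coefficients
step = 0.1                                  # illustrative quantization step (the lossy part)
quantized = np.round(coeffs / step)         # small high-frequency coefficients collapse to 0
restored = B.T @ (quantized * step)         # synthesis: coefficients -> samples

print("nonzero coefficients kept:", int(np.count_nonzero(quantized)), "of", n)
print("max reconstruction error:", float(np.max(np.abs(block - restored))))
```

Because smooth signals concentrate their energy in a few low-frequency coefficients, most quantized coefficients are zero and compress very cheaply, at the cost of a small, bounded reconstruction error.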
Two principal techniques are used in codecs, pulse-code modulation and delta modulation. Codecs are often designed to emphasize certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event needs to encode motion well but not necessarily exact colors, while a video of an art exhibit needs to encode color and surface texture well.
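Of the two, delta modulation is simple enough to sketch in a few lines of Python: the encoder emits a single bit per sample saying whether the reconstructed signal should step up or down, and the decoder replays those steps. The step size and test signal here are illustrative; real delta modulators choose the step to balance slope overload against granular noise.

```python
import numpy as np

STEP = 0.05  # illustrative step size

def delta_encode(signal: np.ndarray) -> list[bool]:
    """Encoder: one bit per sample, True = step up, False = step down."""
    bits, approx = [], 0.0
    for sample in signal:
        bit = bool(sample > approx)
        approx += STEP if bit else -STEP
        bits.append(bit)
    return bits

def delta_decode(bits: list[bool]) -> np.ndarray:
    """Decoder: replay the steps to rebuild a staircase approximation."""
    out, approx = [], 0.0
    for bit in bits:
        approx += STEP if bit else -STEP
        out.append(approx)
    return np.array(out)

t = np.linspace(0.0, 1.0, 200)
signal = 0.5 * np.sin(2 * np.pi * 3.0 * t)          # slowly varying test tone
restored = delta_decode(delta_encode(signal))
print("mean absolute error:", float(np.mean(np.abs(signal - restored))))
```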
Audio codecs for cell phones need to have very low latency between source encoding and playback. In contrast, audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit rate.
There are thousands of audio and video codecs, ranging in cost from free to hundreds of dollars or more. This variety of codecs can create compatibility and obsolescence issues. The impact is lessened for older formats, for which free or nearly-free codecs have existed for a long time. The older formats are often ill-suited to modern applications, however, such as playback in small portable devices. For example, raw uncompressed PCM audio (44.1 kHz, 16-bit stereo, as represented on an audio CD or in a .wav or .aiff file) has long been a standard across multiple platforms, but its transmission over networks is slow and expensive compared with more modern compressed formats, such as Opus and MP3.
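The gap in transmission cost is easy to quantify. Below is a back-of-the-envelope Python calculation using the CD-quality parameters quoted above; the 128 kbit/s MP3 figure is a common illustrative bitrate, not a fixed property of the format.

```python
sample_rate = 44_100   # Hz, CD audio
bit_depth = 16         # bits per sample
channels = 2           # stereo

pcm_bps = sample_rate * bit_depth * channels
print(f"raw PCM: {pcm_bps:,} bit/s (about {pcm_bps / 1000:.0f} kbit/s)")

mp3_bps = 128_000      # a common MP3 bitrate, chosen for illustration
print(f"raw PCM is about {pcm_bps / mp3_bps:.0f}x the size of a 128 kbit/s MP3 stream")

minutes = 3
raw_mb = pcm_bps * 60 * minutes / 8 / 1e6
mp3_mb = mp3_bps * 60 * minutes / 8 / 1e6
print(f"a {minutes}-minute track: {raw_mb:.0f} MB raw vs {mp3_mb:.0f} MB as MP3")
```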
Many multimedia data streams contain both audio and video, and often some metadata that permits synchronization of audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data streams to be useful in stored or transmitted form, they must be encapsulated together in a container format.
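A minimal Python sketch of what a container contributes, separate from any codec: already-encoded audio and video packets are interleaved, each tagged with a timestamp so a player can keep the streams synchronized. The field names and packet contents are invented for illustration and do not follow any real container specification.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    stream: str        # "audio" or "video", each encoded by its own codec
    timestamp_ms: int  # presentation time: the metadata that keeps streams in sync
    payload: bytes     # compressed bitstream produced by that stream's encoder

# A container file is, conceptually, an interleaved sequence of such packets.
container = [
    Packet("video", 0,  b"<video frame>"),
    Packet("audio", 0,  b"<audio frame>"),
    Packet("audio", 21, b"<audio frame>"),
    Packet("video", 40, b"<video frame>"),
]

# A player demultiplexes by stream and hands each payload to the right decoder.
for pkt in container:
    decoder = {"audio": "audio decoder", "video": "video decoder"}[pkt.stream]
    print(f"t={pkt.timestamp_ms:>3} ms -> {decoder}: {len(pkt.payload)} bytes")
```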
Lower-bitrate codecs allow more users to share the same amount of bandwidth, but they also introduce more distortion. Beyond the initial increase in distortion, lower-bitrate codecs also achieve their lower bit rates by using more complex algorithms that make certain assumptions, such as those about the media and the packet loss rate. Other codecs may not make those same assumptions. When a user with a low-bitrate codec talks to a user with another codec, additional distortion is introduced by each transcoding.
Audio Video Interleave (AVI) is sometimes erroneously described as a codec, but AVI is actually a container format, while a codec is a software or hardware tool that encodes or decodes audio or video into or from some audio or video format. Audio and video encoded with many codecs might be put into an AVI container, although AVI is not an ISO standard. There are also other well-known container formats, such as Ogg, ASF, QuickTime, RealMedia, Matroska, and DivX Media Format. MPEG transport stream, MPEG program stream, MP4, and ISO base media file format are examples of container formats that are ISO standardized.
A fake codec is malware disguised as a codec: an attacker packages viruses or other malware and presents them as a codec download, typically through a pop-up alert or advertisement. When a user clicks on or downloads the supposed codec, the malware is installed on the computer instead. Once installed, a fake codec is often used to access private data, corrupt the entire computer system, or spread further malware. Fake antivirus (AV) pages were previously one of the most common means of spreading malware, and the two techniques have been used in combination to take advantage of online users, allowing fake codecs to be downloaded automatically to a device through websites linked in pop-up ads, virus or codec alerts, or articles.
{
"paragraph_id": 0,
"text": "A codec is a device or computer program that encodes or decodes a data stream or signal. Codec is a portmanteau of coder/decoder.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In electronic communications, an endec is a device that acts as both an encoder and a decoder on a signal or data stream, and hence is a type of codec. Endec is a portmanteau of encoder/decoder.",
"title": ""
},
{
"paragraph_id": 2,
"text": "A coder or encoder encodes a data stream or a signal for transmission or storage, possibly in encrypted form, and the decoder function reverses the encoding for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the mid-20th century, a codec was a device that coded analog signals into digital form using pulse-code modulation (PCM). Later, the name was also applied to software for converting between digital signal formats, including companding functions.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "An audio codec converts analog audio signals into digital signals for transmission or encodes them for storage. A receiving device converts the digital signals back to analog form using an audio decoder for playback. An example of this is the codecs used in the sound cards of personal computers. A video codec accomplishes the same task for video signals.",
"title": "Examples"
},
{
"paragraph_id": 5,
"text": "An Emergency Alert System unit is usually an endec, but sometimes just a decoder.",
"title": "Examples"
},
{
"paragraph_id": 6,
"text": "When implementing the Infrared Data Association (IrDA) protocol, an endec may be used between the UART and the optoelectronic systems.",
"title": "Examples"
},
{
"paragraph_id": 7,
"text": "In addition to encoding a signal, a codec may also compress the data to reduce transmission bandwidth or storage space. Compression codecs are classified primarily into lossy codecs and lossless codecs.",
"title": "Compression"
},
{
"paragraph_id": 8,
"text": "Lossless codecs are often used for archiving data in a compressed form while retaining all information present in the original stream. If preserving the original quality of the stream is more important than eliminating the correspondingly larger data sizes, lossless codecs are preferred. This is especially true if the data is to undergo further processing (for example editing) in which case the repeated application of processing (encoding and decoding) on lossy codecs will degrade the quality of the resulting data such that it is no longer identifiable (visually, audibly or both). Using more than one codec or encoding scheme successively can also degrade quality significantly. The decreasing cost of storage capacity and network bandwidth has a tendency to reduce the need for lossy codecs for some media.",
"title": "Compression"
},
{
"paragraph_id": 9,
"text": "Many popular codecs are lossy. They reduce quality in order to maximize compression. Often, this type of compression is virtually indistinguishable from the original uncompressed sound or images, depending on the codec and the settings used. The most widely used lossy data compression technique in digital media is based on the discrete cosine transform (DCT), used in compression standards such as JPEG images, H.26x and MPEG video, and MP3 and AAC audio. Smaller data sets ease the strain on relatively expensive storage sub-systems such as non-volatile memory and hard disk, as well as write-once-read-many formats such as CD-ROM, DVD and Blu-ray Disc. Lower data rates also reduce cost and improve performance when the data is transmitted, e.g. over the internet.",
"title": "Compression"
},
{
"paragraph_id": 10,
"text": "Two principal techniques are used in codecs, pulse-code modulation and delta modulation. Codecs are often designed to emphasize certain aspects of the media to be encoded. For example, a digital video (using a DV codec) of a sports event needs to encode motion well but not necessarily exact colors, while a video of an art exhibit needs to encode color and surface texture well.",
"title": "Media codecs"
},
{
"paragraph_id": 11,
"text": "Audio codecs for cell phones need to have very low latency between source encoding and playback. In contrast, audio codecs for recording or broadcast can use high-latency audio compression techniques to achieve higher fidelity at a lower bit rate.",
"title": "Media codecs"
},
{
"paragraph_id": 12,
"text": "There are thousands of audio and video codecs, ranging in cost from free to hundreds of dollars or more. This variety of codecs can create compatibility and obsolescence issues. The impact is lessened for older formats, for which free or nearly-free codecs have existed for a long time. The older formats are often ill-suited to modern applications, however, such as playback in small portable devices. For example, raw uncompressed PCM audio (44.1 kHz, 16-bit stereo, as represented on an audio CD or in a .wav or .aiff file) has long been a standard across multiple platforms, but its transmission over networks is slow and expensive compared with more modern compressed formats, such as Opus and MP3.",
"title": "Media codecs"
},
{
"paragraph_id": 13,
"text": "Many multimedia data streams contain both audio and video, and often some metadata that permits synchronization of audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data streams to be useful in stored or transmitted form, they must be encapsulated together in a container format.",
"title": "Media codecs"
},
{
"paragraph_id": 14,
"text": "Lower bitrate codecs allow more users, but they also have more distortion. Beyond the initial increase in distortion, lower bit rate codecs also achieve their lower bit rates by using more complex algorithms that make certain assumptions, such as those about the media and the packet loss rate. Other codecs may not make those same assumptions. When a user with a low bitrate codec talks to a user with another codec, additional distortion is introduced by each transcoding.",
"title": "Media codecs"
},
{
"paragraph_id": 15,
"text": "Audio Video Interleave (AVI) is sometimes erroneously described as a codec, but AVI is actually a container format, while a codec is a software or hardware tool that encodes or decodes audio or video into or from some audio or video format. Audio and video encoded with many codecs might be put into an AVI container, although AVI is not an ISO standard. There are also other well-known container formats, such as Ogg, ASF, QuickTime, RealMedia, Matroska, and DivX Media Format. MPEG transport stream, MPEG program stream, MP4, and ISO base media file format are examples of container formats that are ISO standardized.",
"title": "Media codecs"
},
{
"paragraph_id": 16,
"text": "Fake codecs are used when an online user takes a type of codec and installs viruses and other malware into whatever data is being compressed and uses it as a disguise. This disguise appears as a codec download through a pop-up alert or ad. When a user goes to click or download that codec, the malware is then installed on the computer. Once a fake codec is installed it is often used to access private data, corrupt an entire computer system or to keep spreading the malware. One of the previous most used ways to spread malware was fake AV pages and with the rise of codec technology, both have been used in combination to take advantage of online users. This combination allows fake codecs to be automatically downloaded to a device through a website linked in a pop-up ad, virus/codec alerts or articles as well.",
"title": "Malware"
}
] | A codec is a device or computer program that encodes or decodes a data stream or signal. Codec is a portmanteau of coder/decoder. In electronic communications, an endec is a device that acts as both an encoder and a decoder on a signal or data stream, and hence is a type of codec. Endec is a portmanteau of encoder/decoder. A coder or encoder encodes a data stream or a signal for transmission or storage, possibly in encrypted form, and the decoder function reverses the encoding for playback or editing. Codecs are used in videoconferencing, streaming media, and video editing applications. | 2001-11-12T18:41:03Z | 2023-12-27T21:15:25Z | [
"Template:Vanchor",
"Template:Div col",
"Template:Reflist",
"Template:Cite web",
"Template:Short description",
"Template:Importance section",
"Template:See",
"Template:Div col end",
"Template:Cite dictionary",
"Template:Compression Methods",
"Template:About",
"Template:Cn"
] | https://en.wikipedia.org/wiki/Codec |
6,663 | Clyde Tombaugh | Clyde William Tombaugh /ˈtɒmbaʊ/ (February 4, 1906 – January 17, 1997) was an American astronomer. He discovered Pluto in 1930, the first object to be discovered in what would later be identified as the Kuiper belt. At the time of discovery, Pluto was considered a planet, but was reclassified as a dwarf planet in 2006. Tombaugh also discovered many asteroids, and called for the serious scientific research of unidentified flying objects.
Tombaugh was born in Streator, Illinois, son of Muron Dealvo Tombaugh, a farmer, and his wife Adella Pearl Chritton. After his family moved to Burdett, Kansas, in 1922, Tombaugh's plans for attending college were frustrated when a hailstorm ruined his family's farm crops.
Beginning in 1926, he built several telescopes with lenses and mirrors by himself. To better test his telescope mirrors, Tombaugh, with just a pick and shovel, dug a pit 24 feet long, 8 feet deep, and 7 feet wide. This provided a constant air temperature, free of air currents, and was also used by the family as a root cellar and emergency shelter. He sent drawings of Jupiter and Mars to the Lowell Observatory, at Flagstaff, Arizona, which offered him a job. Tombaugh worked there from 1929 to 1945.
It was at Lowell in 1930 that Tombaugh discovered Pluto. Following his discovery, Tombaugh earned bachelor's and master's degrees in astronomy from the University of Kansas in 1936 and 1938. While a young researcher working for the Lowell Observatory in Flagstaff, Arizona, Tombaugh was given the job of performing a systematic search for a trans-Neptunian planet (also called Planet X), which had been predicted by Percival Lowell based on calculations performed by his student mathematician Elizabeth Williams and William Pickering.
Starting April 6, 1929, Tombaugh used the observatory's 13-inch (330 mm) astrograph to take photographs of the same section of sky several nights apart. He then used a blink comparator to compare the different images. When he shifted between the two images, a moving object, such as a planet, would appear to jump from one position to another, while the more distant objects such as stars would appear stationary. Tombaugh noticed such a moving object in his search, near the place predicted by Lowell, and subsequent observations showed it to have an orbit beyond that of Neptune. This ruled out classification as an asteroid, and they decided this was the ninth planet that Lowell had predicted. The discovery was made on Tuesday, February 18, 1930, using images taken the previous month.
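The blink-comparison technique amounts to differencing two registered images; here is a toy Python sketch with invented coordinates. The fixed background stars are identical on both plates and cancel out, while the moving object appears at two different positions, just as Pluto did on Tombaugh's plates.

```python
import numpy as np

def plate(star_positions):
    """A toy 10x10 'photographic plate' with bright pixels at the given positions."""
    img = np.zeros((10, 10))
    for row, col in star_positions:
        img[row, col] = 1.0
    return img

fixed_stars = [(1, 1), (2, 7), (5, 4), (8, 8)]   # background stars: same on both plates
night1 = plate(fixed_stars + [(4, 2)])           # moving object's position, first plate
night2 = plate(fixed_stars + [(4, 3)])           # it has shifted a few nights later

# Blinking between plates: stationary stars cancel; the mover shows up twice.
moved = np.argwhere(night1 != night2)
print("object moved:", moved.tolist())           # [[4, 2], [4, 3]]
```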
Three classical mythological names were about equally popular among proposals for the new planet: Minerva, Cronus and Pluto. However, Minerva was already in use and the primary supporter of Cronus was widely disliked, leaving Pluto as the front-runner. Outside of Lowell staff, it was first proposed by an 11-year-old English schoolgirl, Venetia Burney. In its favor was that the Pluto of Roman mythology was able to render himself invisible, and that its first two letters formed Percival Lowell's initials. In order to avoid the name changes suffered by Neptune, the name was proposed to both the American Astronomical Society and the Royal Astronomical Society, both of which approved it unanimously. The name was officially adopted on May 1, 1930.
Following the discovery, it was recognized that Pluto wasn't massive enough to be the expected ninth planet, and some astronomers began to consider it the first of a new class of object – and indeed Tombaugh searched for additional trans-Neptunian objects for years, though due to the lack of any further discoveries he concluded that Pluto was indeed a planet. The idea that Pluto was not a true planet remained a minority position until the discovery of other Kuiper belt objects in the late 1990s, which showed that it did not orbit alone but was at best the largest of a number of icy bodies in its region of space. After it was shown that at least one such body, dubbed Eris, was more massive than Pluto, the International Astronomical Union (IAU) reclassified Pluto on August 24, 2006, as a dwarf planet, leaving eight planets in the Solar System.
Tombaugh's widow Patricia stated after the IAU's decision that while he might have been disappointed with the change since he had resisted attempts to remove Pluto's planetary status in his lifetime, he would have accepted the decision now if he were alive. She noted that he "was a scientist. He would understand they had a real problem when they start finding several of these things flying around the place." Hal Levison offered this perspective on Tombaugh's place in history: "Clyde Tombaugh discovered the Kuiper Belt. That's a helluva lot more interesting than the ninth planet."
Tombaugh continued searching for over a decade after the discovery of Pluto, and the lack of further discoveries left him satisfied that no other object of a comparable apparent magnitude existed near the ecliptic. No more trans-Neptunian objects were discovered until 15760 Albion, in 1992.
However, more recently the relatively bright object Makemake has been discovered. It has a relatively high orbital inclination, but at the time of Tombaugh's discovery of Pluto, Makemake was only a few degrees from the ecliptic, near the border of Taurus and Auriga, at an apparent magnitude of 16. This position was also very near the galactic equator, making it almost impossible to find such an object within the dense concentration of background stars of the Milky Way. In the fourteen years of looking for planets, until he was drafted in July 1943, Tombaugh looked for motion in 90 million star images (two each of 45 million stars).
Tombaugh is officially credited by the Minor Planet Center with discovering 15 asteroids, and he observed nearly 800 asteroids during his search for Pluto and years of follow-up searches looking for another candidate for the postulated Planet X. Tombaugh is also credited with the discovery of periodic comet 274P/Tombaugh–Tenagra. He also discovered hundreds of variable stars, as well as star clusters, galaxy clusters, and a galaxy supercluster.
The asteroid 1604 Tombaugh, discovered in 1931, is named after him. He discovered hundreds of asteroids, beginning with 2839 Annette in 1929, mostly as a by-product of his search for Pluto and his searches for other celestial objects. Tombaugh named some of them after his wife, children and grandchildren. The Royal Astronomical Society awarded him the Jackson-Gwilt Medal in 1931.
Tombaugh was probably the most eminent astronomer to have reported seeing unidentified flying objects. On August 20, 1949, Tombaugh saw several unidentified objects near Las Cruces, New Mexico. He described them as six to eight rectangular lights, stating: "I doubt that the phenomenon was any terrestrial reflection, because... nothing of the kind has ever appeared before or since... I was so unprepared for such a strange sight that I was really petrified with astonishment."
Tombaugh observed these rectangles of light for about 3 seconds and his wife saw them for about 1½ seconds. He never supported the interpretation as a spaceship that has often been attributed to him. He considered other possibilities, with a temperature inversion as the most likely cause.
From my own studies of the solar system I cannot entertain any serious possibility for intelligent life on other planets, not even for Mars... The logistics of visitations from planets revolving around the nearer stars is staggering. In consideration of the hundreds of millions of years in the geologic time scale when such visits may have possibly occurred, the odds of a single visit in a given century or millennium are overwhelmingly against such an event.
A much more likely source of explanation is some natural optical phenomenon in our own atmosphere. In my 1949 sightings the faintness of the object, together with the manner of fading in intensity as it traveled away from the zenith towards the southeastern horizon, is quite suggestive of a reflection from an optical boundary or surface of slight contrast in refractive index, as in an inversion layer.
I have never seen anything like it before or since, and I have spent a lot of time where the night sky could be seen well. This suggests that the phenomenon involves a comparatively rare set of conditions or circumstances to produce it, but nothing like the odds of an interstellar visitation.
Another sighting by Tombaugh a year or two later while at a White Sands observatory was of an object of −6 magnitude, four times brighter than Venus at its brightest, going from the zenith to the southern horizon in about 3 seconds. The object executed the same maneuvers as in Tombaugh's first sighting.
Tombaugh later reported having seen three of the mysterious green fireballs, which suddenly appeared over New Mexico in late 1948 and continued at least through the early 1950s. A researcher on Project Twinkle reported that Tombaugh "... never observed an unexplainable aerial object despite his continuous and extensive observations of the sky."
Shortly after this, in January 1957, in an Associated Press article in the Alamogordo Daily News titled "Celestial Visitors May Be Invading Earth's Atmosphere", Tombaugh was again quoted on his sightings and opinion about them. "Although our own solar system is believed to support no other life than on Earth, other stars in the galaxy may have hundreds of thousands of habitable worlds. Races on these worlds may have been able to utilize the tremendous amounts of power required to bridge the space between the stars ...". Tombaugh stated that he had observed celestial phenomena which he could not explain, but had seen none personally since 1951 or 1952. "These things, which do appear to be directed, are unlike any other phenomena I ever observed. Their apparent lack of obedience to the ordinary laws of celestial motion gives credence."
In 1949, Tombaugh had also told the Naval missile director at White Sands Missile Range, Commander Robert McLaughlin, that he had seen a bright flash on Mars on August 27, 1941, which he now attributed to an atomic blast. Tombaugh also noted that the first atomic bomb tested in New Mexico would have lit up the dark side of the Earth like a neon sign and that Mars was coincidentally quite close at the time, the implication apparently being that the atomic test would have been visible from Mars.
In June 1952, Dr. J. Allen Hynek, an astronomer acting as a scientific consultant to the Air Force's Project Blue Book UFO study, secretly conducted a survey of fellow astronomers on UFO sightings and attitudes while attending an astronomy convention. Tombaugh and four other astronomers, including Dr. Lincoln LaPaz of the University of New Mexico, told Hynek about their sightings. Tombaugh also told Hynek that his telescopes were at the Air Force's disposal for taking photos of UFOs, if he was properly alerted.
Tombaugh's offer may have led to his involvement in a search for Near-Earth objects, first announced in late 1953 and sponsored by the Army Office of Ordnance Research. Another public statement was made on the search in March 1954, emphasizing the rationale that such an orbiting object would serve as a natural space station. However, according to Donald Keyhoe, later director of the National Investigations Committee on Aerial Phenomena (NICAP), the real reason for the sudden search was because two near-Earth orbiting objects had been picked up on new long-range radar in the summer of 1953, according to his Pentagon source.
By May 1954, Keyhoe was making public statements that his sources told him the search had indeed been successful, and either one or two objects had been found. However, the story did not break until August 23, 1954, when Aviation Week magazine stated that two satellites had been found only 400 and 600 miles out. They were termed "natural satellites", and it was implied that they had been recently captured, despite this being a virtual impossibility. The next day, the story was in many major newspapers. Dr. LaPaz was implicated in the discovery in addition to Tombaugh. LaPaz had earlier conducted secret investigations on behalf of the Air Force on the green fireballs and other unidentified aerial phenomena over New Mexico. The New York Times reported on August 29 that "a source close to the O. O. R. unit here described as 'quite accurate' the report in the magazine Aviation Week that two previously unobserved satellites had been spotted and identified by Dr. Lincoln LaPaz of the University of New Mexico as natural and not artificial objects. This source also said there was absolutely no connection between the reported satellites and flying saucer reports." However, in the October 10 issue, LaPaz said the magazine article was "false in every particular, in so far as reference to me is concerned."
Both LaPaz and Tombaugh were to issue public denials that anything had been found. The October 1955 issue of Popular Mechanics magazine reported: "Professor Tombaugh is closemouthed about his results. He won't say whether or not any small natural satellites have been discovered. He does say, however, that newspaper reports of 18 months ago announcing the discovery of natural satellites at 400 and 600 miles out are not correct. He adds that there is no connection between the search program and the reports of so-called flying saucers."
At a meteor conference in Los Angeles in 1957, Tombaugh reiterated that his four-year search for "natural satellites" had been unsuccessful. In 1959, Tombaugh was to issue a final report stating that nothing had been found in his search. His personal 16-inch telescope was reassembled and dedicated on September 17, 2009, at Rancho Hidalgo, New Mexico (near Animas, New Mexico), adjacent to Astronomy's new observatory.
During World War II he taught naval personnel navigation at Northern Arizona University. He worked at White Sands Missile Range in the early 1950s, and taught astronomy at New Mexico State University from 1955 until his retirement in 1973. In 1980 he was inducted into the International Space Hall of Fame. In 1991, he received the American Academy of Achievement's Golden Plate Award presented by Awards Council member Glenn T. Seaborg.
Direct visual observation became rare in astronomy. By 1965 Robert S. Richardson called Tombaugh one of two great living experienced visual observers as talented as Percival Lowell or Giovanni Schiaparelli. In 1980, Tombaugh and Patrick Moore wrote a book, Out of the Darkness: The Planet Pluto. In August 1992, JPL scientist Robert Staehle called Tombaugh, requesting permission to visit his planet. "I told him he was welcome to it," Tombaugh later remembered, "though he's got to go one long, cold trip." The call eventually led to the launch of the New Horizons space probe to Pluto in 2006. Following New Horizons' flyby of Pluto on July 14, 2015, the "Heart of Pluto" was named Tombaugh Regio.
Clyde Tombaugh had five siblings. Through the daughter of his youngest brother, Robert, he is the great-uncle of Los Angeles Dodgers pitcher Clayton Kershaw.
Tombaugh was an active Unitarian Universalist, and he and his wife helped found the Unitarian Universalist Church of Las Cruces, New Mexico.
Tombaugh died on January 17, 1997, in Las Cruces, New Mexico, at the age of 90, and he was cremated. A small portion of his ashes was placed aboard the New Horizons spacecraft. The container includes the inscription: "Interred herein are remains of American Clyde W. Tombaugh, discoverer of Pluto and the Solar System's 'third zone'. Adelle and Muron's boy, Patricia's husband, Annette and Alden's father, astronomer, teacher, punster and friend: Clyde W. Tombaugh (1906–1997)". Tombaugh was survived by his wife, Patricia (1912–2012), and their children, Annette and Alden.
I had never been much interested in Pluto—too few facts and too much speculation, too far away and not desirable real estate. By comparison the Moon was a choice residential suburb. Professor Tombaugh (the one the station was named for) was working on a giant electronic telescope to photograph it, under a Guggenheim grant, but he had a special interest; he discovered Pluto years before I was born. | [
https://en.wikipedia.org/wiki/Clyde_Tombaugh
Christopher Báthory

Christopher Báthory (Hungarian: Báthory Kristóf; 1530 – 27 May 1581) was voivode of Transylvania from 1576 to 1581. He was a younger son of Stephen Báthory of Somlyó. Christopher's career began during the reign of Queen Isabella Jagiellon, who administered the eastern territories of the Kingdom of Hungary on behalf of her son, John Sigismund Zápolya, from 1556 to 1559. He was one of the commanders of John Sigismund's army in the early 1560s.
Christopher's brother, Stephen Báthory, who succeeded John Sigismund in 1571, made Christopher captain of Várad (now Oradea in Romania). After being elected King of Poland, Stephen Báthory adopted the title of Prince of Transylvania and made Christopher voivode in 1576. Christopher cooperated with Márton Berzeviczy, whom his brother appointed to supervise the administration of the Principality of Transylvania as the head of the Transylvanian chancellery at Kraków. Christopher ordered the imprisonment of Ferenc Dávid, a leading theologian of the Unitarian Church of Transylvania, who started to condemn the adoration of Jesus. He supported his brother's efforts to settle the Jesuits in Transylvania.
Christopher was the third of the four sons of Stephen Báthory of Somlyó and Catherine Telegdi. His father was a supporter of John Zápolya, King of Hungary, who made the elder Báthory voivode of Transylvania in February 1530. Christopher was born in the Báthorys' castle at Szilágysomlyó (now Șimleu Silvaniei in Romania) in the same year. His father died in 1534.
His brother, Andrew, and their kinsman, Tamás Nádasdy, took charge of Christopher's education. Christopher visited England, France, Italy, Spain, and the Holy Roman Empire in his youth. He also served as a page in Emperor Charles V's court.
Christopher entered the service of John Zápolya's widow, Isabella Jagiellon, in the late 1550s. At the time, Isabella administered the eastern territories of the Kingdom of Hungary on behalf of her son, John Sigismund Zápolya. She wanted to persuade Henry II of France to withdraw his troops from three fortresses that the Ottomans had captured in Banat, so she sent Christopher to France to start negotiations in 1557.
John Sigismund took charge of the administration of his realm after his mother died on 15 November 1559. He retained his mother's advisors, including Christopher who became one of his most influential officials. After the rebellion of Melchior Balassa, Christopher persuaded John Sigismund to fight for his realm instead of fleeing to Poland in 1562. Christopher was one of the commanders of John Sigismund's troops during the ensuing war against the Habsburg rulers of the western territories of the Kingdom of Hungary, Ferdinand and Maximilian, who tried to reunite the kingdom under their rule. Christopher defeated Maximilian's commander, Lazarus von Schwendi, forcing him to lift the siege of Huszt (now Khust in Ukraine) in 1565.
After the death of John Sigismund, the Diet of Transylvania elected Christopher's younger brother, Stephen Báthory, voivode (or ruler) on 25 May 1571. Stephen made Christopher captain of Várad (now Oradea in Romania). The following year, the Ottoman Sultan, Selim II (who was the overlord of Transylvania), acknowledged the hereditary right of the Báthory family to rule the province.
Stephen Báthory was elected King of Poland on 15 December 1575. He adopted the title of Prince of Transylvania and made Christopher voivode on 14 January 1576. An Ottoman delegation confirmed Christopher's appointment at the Diet in Gyulafehérvár (now Alba Iulia in Romania) in July. The sultan's charter (or ahidnâme) sent to Christopher emphasized that he should keep the peace along the frontiers. Stephen set up a separate chancellery in Kraków to keep an eye on the administration of Transylvania. The head of the new chancellery, Márton Berzeviczy, and Christopher cooperated closely.
Anti-Trinitarian preachers began to condemn the worshiping of Jesus in Partium and Székely Land in 1576, although the Diet had already forbidden all doctrinal innovations. Ferenc Dávid, the most influential leader of the Unitarian Church of Transylvania, openly joined the dissenters in the autumn of 1578. Christopher invited Fausto Sozzini, a leading Anti-Trinitarian theologian, to Transylvania to convince Dávid that the new teaching was erroneous. Since Dávid refused to obey, Christopher held a Diet at which the "Three Nations" (including the Unitarian delegates) ordered Dávid's imprisonment. Christopher also supported his brother's attempts to strengthen the position of the Roman Catholic Church in Transylvania. He granted estates to the Jesuits to promote the establishment of a college in Kolozsvár (now Cluj-Napoca in Romania) on 5 May 1579.
Christopher fell seriously ill after his second wife, Elisabeth Bocskai, died in early 1581. After a false rumor of Christopher's death reached Istanbul, Koca Sinan Pasha offered Transylvania to Pál Márkházy, whom Christopher had forced into exile. Although Christopher's only surviving son, Sigismund, was still a minor, the Diet elected him voivode before Christopher's death, because it wanted to prevent the appointment of Márkházy. Christopher died in Gyulafehérvár on 27 May 1581. He was buried in the Jesuits' church in Gyulafehérvár almost two years later, on 14 March 1583.
Christopher's first wife, Catherina Danicska, was a Polish noblewoman, but only the Hungarian form of her name is known. Their eldest son, Balthasar Báthory, moved to Kraków shortly after Stephen Báthory was crowned King of Poland; he drowned in the Vistula River in May 1577 at the age of 22. Christopher's and Catherina's second son, Nicholas, was born in 1567 and died in 1576.
Christopher's second wife, Elisabeth Bocskai, was a Calvinist noblewoman. Their first child, Cristina (or Griselda), was born in 1569. She was given in marriage to Jan Zamoyski, Chancellor of Poland, in 1583. Christopher's youngest son, Sigismund, was born in 1573.

https://en.wikipedia.org/wiki/Christopher_B%C3%A1thory
|
CPAN

The Comprehensive Perl Archive Network (CPAN) is a repository of over 250,000 software modules and accompanying documentation for 39,000 distributions, written in the Perl programming language by over 12,000 contributors. CPAN can denote either the archive network or the Perl program that acts as an interface to the network and as an automated software installer (somewhat like a package manager). Most software on CPAN is free and open source software.
CPAN was conceived in 1993 and has been active online since October 1995. It is based on the CTAN model and began as a place to unify the structure of scattered Perl archives.
Like many programming languages, Perl has mechanisms to use external libraries of code, letting one file contain common routines used by several programs. Perl calls these modules. Perl modules are typically installed in one of several directories whose paths are placed in the Perl interpreter when it is first compiled; on Unix-like operating systems, common paths include /usr/lib/perl5, /usr/local/lib/perl5, and several of their subdirectories.
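For illustration, a minimal sketch of how a Perl program pulls in a module: the interpreter searches the directories listed in the built-in @INC array (which includes paths such as those above) and records where each loaded module was found in %INC. File::Basename is simply an arbitrary core module used as the example here.

```perl
#!/usr/bin/perl
use strict;
use warnings;

use File::Basename;  # found by searching the directories in @INC

# Show the module search path compiled into this interpreter.
print "Module search paths:\n";
print "  $_\n" for @INC;

# %INC maps loaded modules to the files they were loaded from.
print "File::Basename was loaded from: $INC{'File/Basename.pm'}\n";
```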
Perl comes with a small set of core modules. Some of these perform bootstrapping tasks, such as ExtUtils::MakeMaker, which is used to create Makefiles for building and installing other extension modules; others, like List::Util, are merely commonly used.
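As a short example of one of these commonly used core modules, the sketch below uses List::Util, which has shipped with Perl since 5.8; the data is made up.

```perl
use strict;
use warnings;
use List::Util qw(sum max first);  # core module of common list routines

my @temps = (18, 21, 19, 25, 22);      # arbitrary sample data
my $hot   = first { $_ > 20 } @temps;  # first element matching the block

printf "sum=%d max=%d first>20=%d\n", sum(@temps), max(@temps), $hot;
```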
CPAN's main purpose is to help programmers locate modules and programs not included in the Perl standard distribution. Its structure is decentralized. Authors maintain and improve their own modules. Forking, and creating competing modules for the same task or purpose, is common. There is a third-party bug tracking system that is automatically set up for any uploaded distribution, but authors may opt to use a different bug tracking system such as GitHub. Similarly, though GitHub is a popular location to store the source for distributions, it may be stored anywhere the author prefers, or may not be publicly accessible at all. Maintainers may grant permissions to others to maintain or take over their modules, and permissions may be granted by admins for those wishing to take over abandoned modules. Previous versions of updated distributions are retained on CPAN until deleted by the uploader, and a secondary mirror network called BackPAN retains distributions even if they are deleted from CPAN. Also, the complete history of the CPAN and all its modules is available as the GitPAN project, making it easy to see the complete history of any module and to maintain forks. CPAN is also used to distribute new versions of Perl, as well as related projects, such as Parrot and Raku.
Files on the CPAN are referred to as distributions. A distribution may consist of one or more modules, documentation files, or programs packaged in a common archiving format, such as a gzipped tar archive or a ZIP file. Distributions will often contain installation scripts (usually called Makefile.PL or Build.PL) and test scripts which can be run to verify that the contents of the distribution function properly. New distributions are uploaded to the Perl Authors Upload Server, or PAUSE (see the section Uploading distributions with PAUSE).
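A minimal Makefile.PL, of the kind shipped at the top of a distribution, might look like the sketch below; the distribution name, author, and prerequisites are invented for illustration. Running perl Makefile.PL generates a Makefile, after which make, make test, and make install build, test, and install the module.

```perl
# Makefile.PL for a hypothetical distribution Acme-Example.
use strict;
use warnings;
use ExtUtils::MakeMaker;

WriteMakefile(
    NAME         => 'Acme::Example',
    VERSION_FROM => 'lib/Acme/Example.pm',  # $VERSION is read from the module
    PREREQ_PM    => {
        'List::Util' => '1.45',             # declared runtime prerequisites
    },
    ABSTRACT     => 'An illustrative example distribution',
    AUTHOR       => 'A. Uthor <author@example.com>',
);
```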
In 2003, distributions started to include metadata files, called META.yml, indicating the distribution's name, version, dependencies, and other useful information; however, not all distributions contain metadata. When metadata is not present in a distribution, the PAUSE's software will try to analyze the code in the distribution to look for the same information; this is not necessarily very reliable. In 2010, version 2 of this specification was created to be used via a new file called META.json, with the YAML format file often also included for backward compatibility.
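Tools can read this metadata programmatically; a minimal sketch using the CPAN::Meta module (bundled with modern Perls), run inside an unpacked distribution directory, might look like this:

```perl
use strict;
use warnings;
use CPAN::Meta;  # understands both META.json and the older META.yml

# Prefer the newer JSON file when both are present.
my $file = -e 'META.json' ? 'META.json' : 'META.yml';
my $meta = CPAN::Meta->load_file($file);

printf "%s %s\n", $meta->name, $meta->version;

# List the distribution's declared runtime requirements.
my $reqs = $meta->effective_prereqs->requirements_for('runtime', 'requires');
printf "  requires %s %s\n", $_, $reqs->requirements_for_module($_)
    for sort $reqs->required_modules;
```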
With thousands of distributions, CPAN needs to be structured to be useful. Authors often place their modules in the natural hierarchy of Perl module names (such as Apache::DBI or Lingua::EN::Inflect) according to purpose or domain, though this is not enforced.
CPAN module distributions usually have names in the form of CGI-Application-3.1 (where the :: used in the module's name has been replaced with a dash, and the version number has been appended to the name), but this is only a convention; many prominent distributions break the convention, especially those that contain multiple modules. Security restrictions prevent a distribution from ever being replaced with an identical filename, so virtually all distribution names do include a version number.
The distribution infrastructure of CPAN consists of its worldwide network of more than 250 mirrors in more than 60 countries. Each full mirror hosts around 31 gigabytes of data.
Most mirrors update themselves hourly, daily, or every other day from the CPAN master site. Some sites are major FTP servers which mirror lots of other software, but others are simply servers owned by companies that use Perl heavily. There are at least two mirrors on every continent except Antarctica.
Several search engines have been written to help Perl programmers sort through the CPAN. The official search.cpan.org included textual search, a browsable index of modules, and extracted copies of all distributions currently on the CPAN. On 16 May 2018, the Perl Foundation announced that search.cpan.org would be shut down on 29 June 2018 (after 19 years of operation), due to its aging codebase and maintenance burden. Users were transitioned and redirected to the third-party alternative MetaCPAN.
CPAN Testers are a group of volunteers who download and test distributions as they are uploaded to CPAN. This enables authors to have their modules tested on many platforms and environments that they would otherwise not have access to, thus helping to promote portability, as well as a degree of quality. Smoke testers send reports, which are then collated and used for a variety of presentation websites, including the main reports site, statistics, and dependencies.
Authors can upload new distributions to the CPAN through the Perl Authors Upload Server (PAUSE). To do so, they must request a PAUSE account.
Once registered, they may use a web interface at pause.perl.org, or an FTP interface to upload files to their directory and delete them. Modules in the upload will only be indexed as canonical if the module name has not been used before (granting first-come permission to the uploader), or if the uploader has permission for that name, and if the module is a higher version than any existing entry. This can be specified through PAUSE's web interface.
There is also a Perl core module named CPAN; it is usually differentiated from the repository itself by using the name CPAN.pm. CPAN.pm is mainly an interactive shell which can be used to search for, download, and install distributions. An interactive shell called cpan is also provided in the Perl core, and is the usual way of running CPAN.pm. After a short configuration process and mirror selection, it uses tools available on the user's computer to automatically download, unpack, compile, test, and install modules. It is also capable of updating itself.
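Besides the interactive shell, CPAN.pm can be driven non-interactively; a minimal sketch follows (Try::Tiny is just an arbitrary example module, and a configured CPAN.pm with network access is assumed).

```perl
use strict;
use warnings;
use CPAN;

# Resolve the module against the CPAN index, then download, build,
# test, and install the distribution that provides it.
CPAN::Shell->install('Try::Tiny');

# Other one-off commands work the same way, e.g.:
# CPAN::Shell->test('Try::Tiny');  # build and test without installing
```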
An effort to replace CPAN.pm with something cleaner and more modern resulted in the CPANPLUS (or CPAN++) set of modules. CPANPLUS separates the back-end work of downloading, compiling, and installing modules from the interactive shell used to issue commands. It also supports several advanced features, such as cryptographic signature checking and test result reporting. Finally, CPANPLUS can uninstall a distribution. CPANPLUS was added to the Perl core in version 5.10.0, and removed from it in version 5.20.0.
A smaller, leaner modern alternative to these CPAN installers was developed called cpanminus. cpanminus was designed to have a much smaller memory footprint as often required in limited memory environments, and to be usable as a standalone script such that it can even install itself, requiring only the expected set of core Perl modules to be available. It is also available from CPAN as the module App::cpanminus, which installs the cpanm script. It does not maintain or rely on a persistent configuration, but is configured only by the environment and command-line options. cpanminus does not have an interactive shell component. It recognizes the cpanfile format for specifying prerequisites, useful in ad-hoc Perl projects that may not be designed for CPAN installation. cpanminus also has the ability to uninstall distributions.
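A cpanfile is itself a small Perl DSL; a sketch of one for an ad-hoc project follows (module names and versions are illustrative). With it in place, cpanm --installdeps . installs everything listed.

```perl
# cpanfile -- prerequisite declarations for a hypothetical project.
requires 'Plack', '>= 1.0047';   # runtime dependencies
requires 'JSON::MaybeXS';

on 'test' => sub {
    requires 'Test::More', '0.98';  # needed only when running the test suite
};
```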
Each of these modules can check a distribution's dependencies and recursively install any prerequisites, either automatically or with individual user approval. Each supports FTP and HTTP and can work through firewalls and proxies.
Experienced Perl programmers often comment that half of Perl's power is in the CPAN. It has been called Perl's killer app. It is roughly equivalent to Composer for PHP; the PyPI (Python Package Index) repository for Python; RubyGems for Ruby; CRAN for R; npm for Node.js; LuaRocks for Lua; Maven for Java; and Hackage for Haskell. CPAN's use of arbitrated namespaces, a testing regime, and a well-defined documentation style makes it unique.
Given its importance to the Perl developer community, the CPAN both shapes and is shaped by Perl's culture. Its "self-appointed master librarian", Jarkko Hietaniemi, often takes part in April Fools' Day jokes; on 1 April 2002 the site was temporarily renamed CJAN, where the "J" stood for "Java". In 2003, the www.cpan.org domain name was redirected to Matt's Script Archive, a site infamous in the Perl community for its badly written code.
Some of the distributions on the CPAN are distributed as jokes. The Acme:: hierarchy is reserved for joke modules; for instance, Acme::Don't adds a don't function that doesn't run the code given to it (to complement the do built-in, which does). Even outside the Acme:: hierarchy, some modules are still written largely for amusement; one example is Lingua::Romana::Perligata, which can be used to write Perl programs in a subset of Latin.
In 2005, a group of Perl developers who also had an interest in JavaScript got together to create JSAN, the JavaScript Archive Network. The JSAN is a near-direct port of the CPAN infrastructure for use with the JavaScript language, which for most of its lifespan did not have a cohesive "community".
In 2008, after a chance meeting with CPAN admin Adam Kennedy at the Open Source Developers Conference, Linux kernel developer Rusty Russell created the CCAN, the Comprehensive C Archive Network. The CCAN is a direct port of the CPAN architecture for use with the C language.
CRAN, the Comprehensive R Archive Network, is a set of mirrors hosting the R programming language distribution(s), documentation, and contributed extensions.

https://en.wikipedia.org/wiki/CPAN
|
Colorado Rockies

The Colorado Rockies are an American professional baseball team based in Denver. The Rockies compete in Major League Baseball (MLB) as a member club of the National League (NL) West division. The team plays its home baseball games at Coors Field, which is located in the Lower Downtown area of Denver. The club is owned by the Monfort brothers and managed by Bud Black.
The Rockies began as an expansion team for the 1993 season and played their home games for their first two seasons at Mile High Stadium. Since 1995, they have played at Coors Field, which has earned a reputation as a hitter's park. The Rockies have qualified for the postseason five times, each time as a Wild Card winner. In 2007, the team earned its only NL pennant after winning 14 of their final 15 games in the regular season to secure a Wild Card position, capping the streak off with a 13-inning 9–8 victory against the San Diego Padres in the tiebreaker game affectionately known as "Game 163" by Rockies fans. The Rockies then proceeded to sweep the Philadelphia Phillies and Arizona Diamondbacks in the NLDS and NLCS and entered the 2007 World Series as winners of 21 of their last 22 games. However, they were swept by the American League (AL) champions Boston Red Sox in four games.
From 1993 to 2022, the Rockies have an overall record of 2,201–2,495 (.469 winning percentage). After the Denver Nuggets won the 2023 NBA Finals, the Rockies became the only one of Denver’s franchises in the five major North American professional sports leagues (MLB, MLS, NBA, NFL, & NHL) yet to win a championship.
Denver had long been a hotbed of minor league baseball as far back as the late 19th century with the original Denver Bears (or Grizzlies) competing in the Western League before being replaced in 1955 by a AAA team of the same name. Residents and businesses in the area desired a Major League team. Denver's Mile High Stadium was built originally as Denver Bears Stadium, a minor league baseball stadium that could be upgraded to major league standards. Several previous attempts to bring Major League Baseball to Colorado had failed. In 1958, New York lawyer William Shea proposed the new Continental League as a rival to the two existing major leagues. In 1960, the Continental League announced that play would begin in April 1961 with eight teams, including one in Denver headed by Bob Howsam. The new league quickly evaporated, without ever playing a game, when the National League reached expansion agreements to put teams in New York City and Houston, removing much of the impetus behind the Continental League effort. Following the Pittsburgh drug trials in 1985, an unsuccessful attempt was made to purchase the Pittsburgh Pirates and relocate them. However, in January 1990, Colorado's chances for a new team improved when Coors Brewing Company became a limited partner with the AAA Denver Zephyrs.
In 1991, as part of Major League Baseball's two-team expansion (along with the Florida (now Miami) Marlins), an ownership group representing Denver led by John Antonucci and Michael I. Monus was granted a franchise. They took the name "Rockies" due to Denver's proximity to the Rocky Mountains, which is reflected in their logo; the name was previously used by the city's first NHL team (now the New Jersey Devils). Monus and Antonucci were forced to drop out in 1992 after Monus's reputation was ruined by an accounting scandal. Trucking magnate Jerry McMorris stepped in at the 11th hour to save the franchise, allowing the team to begin play in 1993. The Rockies shared Mile High Stadium with the National Football League (NFL)'s Denver Broncos for their first two seasons while Coors Field was constructed. It was completed for the 1995 Major League Baseball season.
In 1993, they began in the West division of the National League. That year the Rockies set the all-time Major League record for attendance, drawing 4,483,350 fans (a record that stands to this day). The Rockies were MLB's first team based in the Mountain Time Zone. They have reached the Major League Baseball postseason five times, each time as the National League wild card team. Twice (1995 and 2009), they were eliminated in the first round of the playoffs. In 2007, the Rockies advanced to the World Series, only to be swept by the Boston Red Sox. The team's stretch run was among the greatest ever for a Major League Baseball team. Having a record of 76–72 at the start of play on September 16, the Rockies proceeded to win 14 of their final 15 regular season games. The stretch culminated with a 9–8, 13-inning victory over the San Diego Padres in a one-game playoff for the wild card berth. Colorado then swept their first seven playoff games to win the NL pennant (thus, at the start of the World Series, the Rockies had won a total of 21 out of 22 games). Fans and media nicknamed their improbable October run "Rocktober".
Colorado made postseason berths in 2017 and 2018. In 2018, the Rockies became the first team since the 1922 Philadelphia Phillies to play in four cities against four teams in five days, including the 162nd game of the regular season, NL West tie-breaker, NL Wild Card Game and NLDS Game 1, eventually losing to the Milwaukee Brewers in the NLDS.
Like their expansion brethren, the Miami Marlins, the Rockies have never won a division title since their establishment; along with the Marlins and the Pittsburgh Pirates, they are one of three MLB teams that have never won their current division. The Rockies have played their home games at Coors Field since 1995. Their newest spring training home, Salt River Fields at Talking Stick in Scottsdale, Arizona, opened in March 2011 and is shared with the Arizona Diamondbacks.
On June 1, 2006, USA Today reported that Rockies management, including manager Clint Hurdle, had instituted an explicitly Christian code of conduct for the team's players, banning men's magazines (such as Maxim and Playboy) and sexually explicit music from the team's clubhouse. The article sparked controversy, and soon after The Denver Post published an article featuring many Rockies players contesting the claims made in the USA Today article. Former Rockies pitcher Jason Jennings said: "[The article in USA Today] was just bad. I am not happy at all. Some of the best teammates I have ever had are the furthest thing from Christian. You don't have to be a Christian to have good character. They can be separate. [The article] was misleading."
On October 17, 2007, a week before the first game of the 2007 World Series against the Boston Red Sox, the Colorado Rockies announced that tickets were to be available to the general public via online sales only, despite prior arrangements to sell the tickets at local retail outlets. Five days later on October 22, California-based ticket vendor Paciolan, Inc., the sole contractor authorized by the Colorado Rockies to distribute tickets, was forced to suspend sales after less than an hour due to an overwhelming number of attempts to purchase tickets. An official release from the baseball organization claimed that they were the victims of a denial of service attack. These claims, however, were unsubstantiated, and neither the Rockies nor Paciolan has sought an investigation into the matter. The United States Federal Bureau of Investigation started its own investigation into the claims. Ticket sales resumed the next day, with all three home games selling out within two and a half hours.
In March 2021, Ken Rosenthal and Nick Groke reported in The Athletic that, during the 2020 season, the Rockies had made baseball operations personnel work as clubhouse attendants in addition to their front office duties, resulting in work days lasting up to 17 hours. Former staffers described doing laundry for players while team personnel asked them for scouting and statistical information. The article further described a general atmosphere of dysfunction and unaccountability in Colorado's front office. General manager Jeff Bridich resigned the following month.
One of the Rockies' team colors is purple, which was inspired by the line "For purple mountain majesties" in "America the Beautiful". The shades of the color used by the ball club lacked uniformity until PMS 2685 was established as the official purple beginning with the 2017 season.
The Rockies' home uniform is white with purple pinstripes, and the Rockies are the first team in Major League history to wear purple pinstripes. The front of the uniform is emblazoned with the team name in silver trimmed in black, and letters and numerals are in black trimmed in silver. During the Rockies' inaugural season, they went without names on the back of their home uniforms, but added them for the following season. In 2000, numerals were added to the chest.
The Rockies' road uniform is grey with purple piping. The front of the uniform originally featured the team name in silver trimmed in purple, but was changed the next season to purple with white trim. Letters and numerals are in purple with white trim. In the 2000 season, piping was replaced with pinstripes, "Colorado" was emblazoned in front, chest numerals were placed, and black trim was added to the letters. Prior to the 2012 season, the Rockies brought back the purple piping on their road uniforms, but kept the other elements of their 2000 uniform change.
The Rockies originally wore an alternate black uniform during their maiden 1993 season, but for only a few games. The uniform featured the team name in silver with purple trim, and letters and numerals in purple with white trim. In the 2005 season, the Rockies started wearing black sleeveless alternate uniforms, featuring "Colorado", letters and numerals in silver with purple and white trim. The uniforms also included black undershirts, and for a few games in 2005, purple undershirts. The Rockies retired the black sleeveless uniform in 2022, replacing it with the "City Connect" uniform (see below).
From 2002 to 2011, the Rockies wore alternate versions of their pinstriped white uniform, featuring the interlocking "CR" on the left chest and numerals on the right chest. This design featured sleeves until 2004, when they went with a vest design with black undershirts.
In addition to the black sleeveless alternate uniform, the Rockies also wear a purple alternate uniform, which they first unveiled in the 2000 season. The design featured "Colorado" in silver with black and white trim, and letters and numerals in black with white trim. At the start of the 2012 season, the Rockies introduced "Purple Mondays" in which the team wears its purple uniform every Monday game day, though the team continued to wear them on other days of the week.
Prior to 2019, the Rockies always wore their white pinstriped pants regardless of what uniform top they wore during home games. However, the Rockies have since added alternate white non-pinstriped pants to pair with either their black or purple alternate uniforms at home, as neither uniform contained pinstripes.
The Rockies currently wear an all-black cap with "CR" in purple trimmed in silver and a purple-brimmed variation as an alternate. The team previously wore an all-purple cap with "CR" in black trimmed in silver, and in the 2018 season, caps with the "CR" in silver to commemorate the team's 25th anniversary.
In 2022, the Rockies were one of seven additional teams to don Nike's "City Connect" uniforms. The set is predominantly green and white with printed mountain range motifs adorning the chest. The lettering was taken from the official Colorado license plates. The right sleeve has a yellow patch featuring the shortened nickname "ROX", the "5280" sign representing the altitude of Denver, two black diamonds representing Double Diamond skiing, and the exact longitude and latitude of Coors Field. The left sleeve has the interlocking "CR" in white with green trim, and purple piping was added to represent purple seats at Coors Field. Caps are green with a white panel, featuring a "CO" patch with various Colorado-inspired symbols, including colors from the state flag and mountain ranges. In 2023, the Rockies tweaked their "City Connect" uniform, pairing it with white pants on day games and green pants on night games.
Todd Helton is the first Colorado player to have his number (17) retired, which was done on Sunday, August 17, 2014.
Jackie Robinson's No. 42 was retired throughout all of baseball in 1997.
Larry Walker, the first member of the Baseball Hall of Fame wearing a Colorado Rockies hat, became the second Colorado player to have his number retired, which occurred in 2021.
Keli McGregor had worked with the Rockies since their inception in 1993, rising from senior director of operations to team president in 2002, until his death on April 20, 2010. He is honored at Coors Field alongside Helton, Walker, and Robinson with his initials.
The Rockies have not re-issued Carlos Gonzalez's No. 5 since he left the team after the 2018 season.
First base:
Second base:
Shortstop:
Third base:
Outfield:
The Rockies developed an on-and-off rivalry with the Arizona Diamondbacks, often attributed to both teams being the newest in the division. Colorado joined the NL West in 1993, while the Diamondbacks, founded in 1998, are the newest team in the league. The two teams have met twice in the postseason, most notably in the 2007 National League Championship Series, in which the Rockies, who had entered the postseason as a wild card, upset the division champion Diamondbacks in a sweep en route to the franchise's lone World Series appearance. The two teams met again in the 2017 National League Wild Card Game, which was won by Arizona.
The Rockies also have clashed in divisional matchups with the Los Angeles Dodgers and San Francisco Giants particularly as both teams often thwarted the Rockies' postseason ambitions by winning the division. The Rockies have never won the NL West while the Dodgers and Giants have combined for 21 division titles since the Rockies began play in 1993.
The Rockies led MLB in attendance for the first seven years of their existence. Their inaugural season still holds the MLB all-time record for home attendance.
+ = 57 home games in strike-shortened season. ++ = 72 home games in strike-shortened season.
The Colorado Rockies farm system consists of seven minor league affiliates.
As of 2010, the Rockies' flagship radio station is KOA 850 AM, with some late-season games broadcast on KHOW 630 AM due to conflicts with Denver Broncos games. The Rockies Radio Network is composed of 38 affiliate stations in eight states.
As of 2019, Jack Corrigan and Jerry Schemmel are the radio announcers, with Corrigan serving as a backup TV announcer whenever Drew Goodman is not available.
In January 2020, long-time KOA radio announcer Jerry Schemmel was let go from his role for budgetary reasons from KOA's parent company. He returned in 2022, replacing Mike Rice, who reportedly refused the COVID-19 vaccine.
As of 2013, Spanish language radio broadcasts of the Rockies are heard on KNRV 1150 AM.
From 1997 to 2023, most regular season games were produced and televised by AT&T SportsNet Rocky Mountain. All 150 games produced by AT&T SportsNet Rocky Mountain were broadcast in HD. Jeff Huson and Drew Goodman are the usual TV broadcast team, with Ryan Spilborghs and Kelsey Wingert handling on-field coverage and clubhouse interviews. Jenny Cavnar, Jason Hirsh, and Cory Sullivan handle the pre-game and post-game shows. Corrigan, Spilborghs, Cavnar, and Sullivan also fill in as play-by-play or color commentator during absences of Huson or Goodman. AT&T SportsNet Rocky Mountain is expected to shut down by the end of 2023, with its final Rockies broadcast being a home game against the Minnesota Twins on October 1. The Rockies are considering a deal with Altitude Sports and Entertainment or assigning their local television rights to MLB. | [
{
"paragraph_id": 0,
"text": "The Colorado Rockies are an American professional baseball team based in Denver. The Rockies compete in Major League Baseball (MLB) as a member club of the National League (NL) West division. The team plays its home baseball games at Coors Field, which is located in the Lower Downtown area of Denver. The club is owned by the Monfort brothers and managed by Bud Black.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Rockies began as an expansion team for the 1993 season and played their home games for their first two seasons at Mile High Stadium. Since 1995, they have played at Coors Field, which has earned a reputation as a hitter's park. The Rockies have qualified for the postseason five times, each time as a Wild Card winner. In 2007, the team earned its only NL pennant after winning 14 of their final 15 games in the regular season to secure a Wild Card position, capping the streak off with a 13-inning 9–8 victory against the San Diego Padres in the tiebreaker game affectionately known as \"Game 163\" by Rockies fans. The Rockies then proceeded to sweep the Philadelphia Phillies and Arizona Diamondbacks in the NLDS and NLCS and entered the 2007 World Series as winners of 21 of their last 22 games. However, they were swept by the American League (AL) champions Boston Red Sox in four games.",
"title": ""
},
{
"paragraph_id": 2,
"text": "From 1993 to 2022, the Rockies have an overall record of 2,201–2,495 (.469 winning percentage). After the Denver Nuggets won the 2023 NBA Finals, the Rockies became the only one of Denver’s franchises in the five major North American professional sports leagues (MLB, MLS, NBA, NFL, & NHL) to yet win a championship.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Denver had long been a hotbed of minor league baseball as far back as the late 19th century with the original Denver Bears (or Grizzlies) competing in the Western League before being replaced in 1955 by a AAA team of the same name. Residents and businesses in the area desired a Major League team. Denver's Mile High Stadium was built originally as Denver Bears Stadium, a minor league baseball stadium that could be upgraded to major league standards. Several previous attempts to bring Major League Baseball to Colorado had failed. In 1958, New York lawyer William Shea proposed the new Continental League as a rival to the two existing major leagues. In 1960, the Continental League announced that play would begin in April 1961 with eight teams, including one in Denver headed by Bob Howsam. The new league quickly evaporated, without ever playing a game, when the National League reached expansion agreements to put teams in New York City and Houston, removing much of the impetus behind the Continental League effort. Following the Pittsburgh drug trials in 1985, an unsuccessful attempt was made to purchase the Pittsburgh Pirates and relocate them. However, in January 1990, Colorado's chances for a new team improved when Coors Brewing Company became a limited partner with the AAA Denver Zephyrs.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 1991, as part of Major League Baseball's two-team expansion (along with the Florida (now Miami) Marlins), an ownership group representing Denver led by John Antonucci and Michael I. Monus was granted a franchise. They took the name \"Rockies\" due to Denver's proximity to the Rocky Mountains, which is reflected in their logo; the name was previously used by the city's first NHL team (now the New Jersey Devils). Monus and Antonucci were forced to drop out in 1992 after Monus's reputation was ruined by an accounting scandal. Trucking magnate Jerry McMorris stepped in at the 11th hour to save the franchise, allowing the team to begin play in 1993. The Rockies shared Mile High Stadium with the National Football League (NFL)'s Denver Broncos for their first two seasons while Coors Field was constructed. It was completed for the 1995 Major League Baseball season.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In 1993, they began in the West division of the National League. That year the Rockies set the all-time Major League record for attendance, drawing an incredible 4,483,350 fans (a record that stands to this day). The Rockies were MLB's first team based in the Mountain Time Zone. They have reached the Major League Baseball postseason five times, each time as the National League wild card team. Twice (1995 and 2009), they were eliminated in the first round of the playoffs. In 2007, the Rockies advanced to the World Series, only to be swept by the Boston Red Sox. The team's stretch run was among the greatest ever for a Major League Baseball team. Having a record of 76-72 at the start of play on September 16, the Rockies proceeded to win 14 of their final 15 regular season games. The stretch culminated with a 9-8, 13-inning victory over the San Diego Padres in a one-game playoff for the wild card berth. Colorado then swept their first seven playoff games to win the NL pennant (thus, at the start of the World Series, the Rockies had won a total of 21 out of 22 games). Fans and media nicknamed their improbable October run \"Rocktober\".",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Colorado made postseason berths in 2017 and 2018. In 2018, the Rockies became the first team since the 1922 Philadelphia Phillies to play in four cities against four teams in five days, including the 162nd game of the regular season, NL West tie-breaker, NL Wild Card Game and NLDS Game 1, eventually losing to the Milwaukee Brewers in the NLDS.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Like their expansion brethren, the Miami Marlins, they have never won a division title since their establishment and they, along with the Pittsburgh Pirates are also one of three MLB teams that have never won their current division. The Rockies have played their home games at Coors Field since 1995. Their newest spring training home, Salt River Fields at Talking Stick in Scottsdale, Arizona, opened in March 2011 and is shared with the Arizona Diamondbacks.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "On June 1, 2006, USA Today reported that Rockies management, including manager Clint Hurdle, had instituted an explicitly Christian code of conduct for the team's players, banning men's magazines (such as Maxim and Playboy) and sexually explicit music from the team's clubhouse. The article sparked controversy, and soon-after The Denver Post published an article featuring many Rockies players contesting the claims made in the USA Today article. Former Rockies pitcher Jason Jennings said: \"[The article in USA Today] was just bad. I am not happy at all. Some of the best teammates I have ever had are the furthest thing from Christian\", Jennings said. \"You don't have to be a Christian to have good character. They can be separate. [The article] was misleading.\"",
"title": "History"
},
{
"paragraph_id": 9,
"text": "On October 17, 2007, a week before the first game of the 2007 World Series against the Boston Red Sox, the Colorado Rockies announced that tickets were to be available to the general public via online sales only, despite prior arrangements to sell the tickets at local retail outlets. Five days later on October 22, California-based ticket vendor Paciolan, Inc., the sole contractor authorized by the Colorado Rockies to distribute tickets, was forced to suspend sales after less than an hour due to an overwhelming number of attempts to purchase tickets. An official release from the baseball organization claimed that they were the victims of a denial of service attack. These claims, however, were unsubstantiated and neither the Rockies nor Paciolan have sought investigation into the matter. The United States Federal Bureau of Investigation started its own investigation into the claims. Ticket sales resumed the next day, with all three home games selling out within two and a half hours.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In March 2021, Ken Rosenthal and Nick Groke reported in The Athletic that, during the 2020 season, the Rockies had made baseball operations personnel work as clubhouse attendants in addition to their front office duties, resulting in work days lasting up to 17 hours. Former staffers described doing laundry for players while team personnel asked them for scouting and statistical information. The article further described a general atmosphere of dysfunction and unaccountability in Colorado's front office. General manager Jeff Bridich resigned the following month.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "One of the Rockies' team colors is purple which was inspired by the line \"For purple mountain majesties\" in \"America the Beautiful\". The shades of the color used by the ball club lacked uniformity until PMS 2685 was established as the official purple beginning with the 2017 season.",
"title": "Uniforms"
},
{
"paragraph_id": 12,
"text": "The Rockies' home uniform is white with purple pinstripes, and the Rockies are the first team in Major League history to wear purple pinstripes. The front of the uniform is emblazoned with the team name in silver trimmed in black, and letters and numerals are in black trimmed in silver. During the Rockies' inaugural season, they went without names on the back of their home uniforms, but added them for the following season. In 2000, numerals were added to the chest.",
"title": "Uniforms"
},
{
"paragraph_id": 13,
"text": "The Rockies' road uniform is grey with purple piping. The front of the uniform originally featured the team name in silver trimmed in purple, but was changed the next season to purple with white trim. Letters and numerals are in purple with white trim. In the 2000 season, piping was replaced with pinstripes, \"Colorado\" was emblazoned in front, chest numerals were placed, and black trim was added to the letters. Prior to the 2012 season, the Rockies brought back the purple piping on their road uniforms, but kept the other elements of their 2000 uniform change.",
"title": "Uniforms"
},
{
"paragraph_id": 14,
"text": "The Rockies originally wore an alternate black uniform during their maiden 1993 season, but for only a few games. The uniform featured the team name in silver with purple trim, and letters and numerals in purple with white trim. In the 2005 season, the Rockies started wearing black sleeveless alternate uniforms, featuring \"Colorado\", letters and numerals in silver with purple and white trim. The uniforms also included black undershirts, and for a few games in 2005, purple undershirts. The Rockies retired the black sleeveless uniform in 2022, replacing it with the \"City Connect\" uniform (see below).",
"title": "Uniforms"
},
{
"paragraph_id": 15,
"text": "From 2002 to 2011, the Rockies wore alternate versions of their pinstriped white uniform, featuring the interlocking \"CR\" on the left chest and numerals on the right chest. This design featured sleeves until 2004, when they went with a vest design with black undershirts.",
"title": "Uniforms"
},
{
"paragraph_id": 16,
"text": "In addition to the black sleeveless alternate uniform, the Rockies also wear a purple alternate uniform, which they first unveiled in the 2000 season. The design featured \"Colorado\" in silver with black and white trim, and letters and numerals in black with white trim. At the start of the 2012 season, the Rockies introduced \"Purple Mondays\" in which the team wears its purple uniform every Monday game day, though the team continued to wear them on other days of the week.",
"title": "Uniforms"
},
{
"paragraph_id": 17,
"text": "Prior to 2019, the Rockies always wore their white pinstriped pants regardless of what uniform top they wore during home games. However, the Rockies have since added alternate white non-pinstriped pants to pair with either their black or purple alternate uniforms at home, as neither uniform contained pinstripes.",
"title": "Uniforms"
},
{
"paragraph_id": 18,
"text": "The Rockies currently wear an all-black cap with \"CR\" in purple trimmed in silver and a purple-brimmed variation as an alternate. The team previously wore an all-purple cap with \"CR\" in black trimmed in silver, and in the 2018 season, caps with the \"CR\" in silver to commemorate the team's 25th anniversary.",
"title": "Uniforms"
},
{
"paragraph_id": 19,
"text": "In 2022, the Rockies were one of seven additional teams to don Nike's \"City Connect\" uniforms. The set is predominantly green and white with printed mountain range motifs adorning the chest. The lettering was taken from the official Colorado license plates. The right sleeve has a yellow patch featuring the shortened nickname \"ROX\", the \"5280\" sign representing the altitude of Denver, two black diamonds representing Double Diamond skiing, and the exact longitude and latitude of Coors Field. The left sleeve has the interlocking \"CR\" in white with green trim, and purple piping was added to represent purple seats at Coors Field. Caps are green with a white panel, featuring a \"CO\" patch with various Colorado-inspired symbols, including colors from the state flag and mountain ranges. In 2023, the Rockies tweaked their \"City Connect\" uniform, pairing it with white pants on day games and green pants on night games.",
"title": "Uniforms"
},
{
"paragraph_id": 20,
"text": "Todd Helton is the first Colorado player to have his number (17) retired, which was done on Sunday, August 17, 2014.",
"title": "Baseball Hall of Famers"
},
{
"paragraph_id": 21,
"text": "Jackie Robinson's No. 42, was retired throughout all of baseball in 1997.",
"title": "Baseball Hall of Famers"
},
{
"paragraph_id": 22,
"text": "Larry Walker, the first member of the Baseball Hall of Fame wearing a Colorado Rockies hat, became the second Colorado player to have his number retired, which occurred in 2021.",
"title": "Baseball Hall of Famers"
},
{
"paragraph_id": 23,
"text": "Keli McGregor had worked with the Rockies since their inception in 1993, rising from senior director of operations to team president in 2002, until his death on April 20, 2010. He is honored at Coors Field alongside Helton, Walker, and Robinson with his initials.",
"title": "Baseball Hall of Famers"
},
{
"paragraph_id": 24,
"text": "The Rockies have not re-issued Carlos Gonzalez's No. 5 since leaving the team after 2018.",
"title": "Baseball Hall of Famers"
},
{
"paragraph_id": 25,
"text": "First base:",
"title": "Individual awards"
},
{
"paragraph_id": 26,
"text": "Second base:",
"title": "Individual awards"
},
{
"paragraph_id": 27,
"text": "Shortstop:",
"title": "Individual awards"
},
{
"paragraph_id": 28,
"text": "Third base:",
"title": "Individual awards"
},
{
"paragraph_id": 29,
"text": "Outfield:",
"title": "Individual awards"
},
{
"paragraph_id": 30,
"text": "",
"title": "Individual awards"
},
{
"paragraph_id": 31,
"text": "The Rockies developed an on-and-off rivalry with the Arizona Diamondbacks, often attributed to both teams being the newest in the division. Colorado had joined the NL West in 1993, while the Diamondbacks are the newest team in the league; founding in 1998. The two teams have met twice in the postseason; notably during the 2007 National League Championship Series which saw the Rockies enter the postseason as a wild card, and went on to upset the division champion Diamondbacks in a sweep en route to the franchise's lone World Series appearance. The two teams met again in the 2017 National League Wild Card Game, which was won by Arizona.",
"title": "Rivalries"
},
{
"paragraph_id": 32,
"text": "The Rockies also have clashed in divisional matchups with the Los Angeles Dodgers and San Francisco Giants particularly as both teams often thwarted the Rockies' postseason ambitions by winning the division. The Rockies have never won the NL West while the Dodgers and Giants have combined for 21 division titles since the Rockies began play in 1993.",
"title": "Rivalries"
},
{
"paragraph_id": 33,
"text": "The Rockies led MLB attendance records for the first seven years of their existence. The inaugural season is currently the MLB all-time record for home attendance.",
"title": "Home attendance"
},
{
"paragraph_id": 34,
"text": "+ = 57 home games in strike shortened season. ++ = 72 home games in strike shortened season.",
"title": "Home attendance"
},
{
"paragraph_id": 35,
"text": "The Colorado Rockies farm system consists of seven minor league affiliates.",
"title": "Minor league affiliations"
},
{
"paragraph_id": 36,
"text": "As of 2010, Rockies' flagship radio station is KOA 850AM, with some late-season games broadcast on KHOW 630 AM due to conflicts with Denver Broncos games. The Rockies Radio Network is composed of 38 affiliate stations in eight states.",
"title": "Radio and television"
},
{
"paragraph_id": 37,
"text": "As of 2019, Jack Corrigan and Jerry Schemmel are the radio announcers, serving as a backup TV announcer whenever Drew Goodman is not available.",
"title": "Radio and television"
},
{
"paragraph_id": 38,
"text": "In January 2020, long-time KOA radio announcer Jerry Schemmel was let go from his role for budgetary reasons from KOA's parent company. He returned in 2022, replacing Mike Rice, who reportedly refused the COVID-19 vaccine.",
"title": "Radio and television"
},
{
"paragraph_id": 39,
"text": "As of 2013, Spanish language radio broadcasts of the Rockies are heard on KNRV 1150 AM.",
"title": "Radio and television"
},
{
"paragraph_id": 40,
"text": "From 1997 to 2023, most regular season games were produced and televised by AT&T SportsNet Rocky Mountain. All 150 games produced by AT&T SportsNet Rocky Mountain were broadcast in HD. Jeff Huson and Drew Goodman are the usual TV broadcast team, with Ryan Spilborghs and Kelsey Wingert handling on-field coverage and clubhouse interviews. Jenny Cavnar, Jason Hirsh, and Cory Sullivan handle the pre-game and post-game shows. Corrigan, Spilborghs, Cavnar, and Sullivan also fill in as play-by-play or color commentator during absences of Huson or Goodman. AT&T SportsNet Rocky Mountain is expected to shut down by the end of 2023, with their final Rockies broadcast being a home game against the Minnesota Twins on October 1. The Rockies are considering a deal with Altitude Sports and Entertainment or assigning their local television rights to MLB.",
"title": "Radio and television"
}
] | The Colorado Rockies are an American professional baseball team based in Denver. The Rockies compete in Major League Baseball (MLB) as a member club of the National League (NL) West division. The team plays its home baseball games at Coors Field, which is located in the Lower Downtown area of Denver. The club is owned by the Monfort brothers and managed by Bud Black. The Rockies began as an expansion team for the 1993 season and played their home games for their first two seasons at Mile High Stadium. Since 1995, they have played at Coors Field, which has earned a reputation as a hitter's park. The Rockies have qualified for the postseason five times, each time as a Wild Card winner. In 2007, the team earned its only NL pennant after winning 14 of their final 15 games in the regular season to secure a Wild Card position, capping the streak off with a 13-inning 9–8 victory against the San Diego Padres in the tiebreaker game affectionately known as "Game 163" by Rockies fans. The Rockies then proceeded to sweep the Philadelphia Phillies and Arizona Diamondbacks in the NLDS and NLCS and entered the 2007 World Series as winners of 21 of their last 22 games. However, they were swept by the American League (AL) champions Boston Red Sox in four games. From 1993 to 2022, the Rockies have an overall record of 2,201–2,495. After the Denver Nuggets won the 2023 NBA Finals, the Rockies became the only one of Denver’s franchises in the five major North American professional sports leagues yet to win a championship. | 2001-10-08T21:43:41Z | 2023-12-20T21:48:35Z | [
"Template:Colorado Rockies",
"Template:Infobox MLB",
"Template:Further",
"Template:Multiple image",
"Template:Sister project links",
"Template:Citation needed",
"Template:S-ttl",
"Template:Authority control",
"Template:Winpct",
"Template:Reflist",
"Template:Citation",
"Template:S-start-collapsible",
"Template:See also",
"Template:Cite web",
"Template:Cite news",
"Template:Cite episode",
"Template:Portal bar",
"Template:Mlby",
"Template:S-end",
"Template:Cite journal",
"Template:S-aft",
"Template:Navboxes",
"Template:Baseball year",
"Template:S-start",
"Template:Cite press release",
"Template:MLBTeam",
"Template:About",
"Template:Main article",
"Template:Webarchive",
"Template:S-bef",
"Template:Colorado Rockies roster",
"Template:Short description",
"Template:Use mdy dates",
"Template:Baseball hall of fame list",
"Template:Retired number list"
] | https://en.wikipedia.org/wiki/Colorado_Rockies |
6,670 | Cement | A cement is a binder, a chemical substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry, or with sand and gravel, produces concrete. Concrete is the most widely used material in existence and is behind only water as the planet's most-consumed resource.
Cements used in construction are usually inorganic, often lime or calcium silicate based, which can be characterized as hydraulic or the less common non-hydraulic, depending on the ability of the cement to set in the presence of water (see hydraulic and non-hydraulic lime plaster).
Hydraulic cements (e.g., Portland cement) set and become adhesive through a chemical reaction between the dry ingredients and water. The chemical reaction results in mineral hydrates that are not very water-soluble and so are quite durable in water and safe from chemical attack. This allows setting in wet conditions or under water and further protects the hardened material from chemical attack. The chemical process for hydraulic cement was found by ancient Romans who used volcanic ash (pozzolana) with added lime (calcium oxide).
Non-hydraulic cement (less common) does not set in wet conditions or under water. Rather, it sets as it dries and reacts with carbon dioxide in the air. It is resistant to attack by chemicals after setting.
The word "cement" can be traced back to the Ancient Roman term opus caementicium, used to describe masonry resembling modern concrete that was made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick supplements that were added to the burnt lime, to obtain a hydraulic binder, were later referred to as cementum, cimentum, cäment, and cement. In modern times, organic polymers are sometimes used as cements in concrete.
World production of cement is about 4.4 billion tonnes per year (2021 estimate), of which about half is made in China, followed by India and Vietnam.
The cement production process is responsible for nearly 8% (2018) of global CO2 emissions, which includes both the fuel burned to heat raw materials in a cement kiln and the release of the CO2 stored in the calcium carbonate (the calcination process). Its hydrated products, such as concrete, gradually reabsorb substantial amounts of atmospheric CO2 (the carbonation process), offsetting an estimated 30% of the initial CO2 emissions.
Cement materials can be classified into two distinct categories, hydraulic cements and non-hydraulic cements, according to their respective setting and hardening mechanisms. Hydraulic cements set and harden through hydration reactions and therefore require water, while non-hydraulic cements react only with a gas and can set directly in air.
By far the most common type of cement is hydraulic cement, which hardens by hydration of the clinker minerals when water is added. Hydraulic cements (such as Portland cement) are made of a mixture of silicates and oxides, the four main mineral phases of the clinker, abbreviated in the cement chemist notation, being:
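alite, or tricalcium silicate (3 CaO · SiO2, abbreviated C3S); belite, or dicalcium silicate (2 CaO · SiO2, C2S); tricalcium aluminate (3 CaO · Al2O3, C3A); and brownmillerite, or tetracalcium aluminoferrite (4 CaO · Al2O3 · Fe2O3, C4AF).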
The silicates are responsible for the cement's mechanical properties; the tricalcium aluminate and brownmillerite are essential for the formation of the liquid phase during the sintering (firing) process of clinker at high temperature in the kiln. The chemistry of these reactions is not completely clear and is still the object of research.
First, the limestone (calcium carbonate) is burned to remove its carbon, producing lime (calcium oxide) in what is known as a calcination reaction. This single chemical reaction is a major emitter of global carbon dioxide emissions.
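CaCO3 → CaO + CO2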
The lime reacts with silicon dioxide to produce dicalcium silicate and tricalcium silicate.
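2 CaO + SiO2 → 2 CaO · SiO2 (dicalcium silicate) and 3 CaO + SiO2 → 3 CaO · SiO2 (tricalcium silicate)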
The lime also reacts with aluminium oxide to form tricalcium aluminate.
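3 CaO + Al2O3 → 3 CaO · Al2O3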
In the last step, calcium oxide, aluminium oxide, and ferric oxide react together to form cement.
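4 CaO + Al2O3 + Fe2O3 → 4 CaO · Al2O3 · Fe2O3 (tetracalcium aluminoferrite)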
A less common form of cement is non-hydraulic cement, such as slaked lime (calcium oxide mixed with water), which hardens by carbonation in contact with carbon dioxide, which is present in the air (~ 412 vol. ppm ≃ 0.04 vol. %). First calcium oxide (lime) is produced from calcium carbonate (limestone or chalk) by calcination at temperatures above 825 °C (1,517 °F) for about 10 hours at atmospheric pressure:
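CaCO3 → CaO + CO2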
The calcium oxide is then spent (slaked) by mixing it with water to make slaked lime (calcium hydroxide):
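CaO + H2O → Ca(OH)2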
Once the excess water is completely evaporated (this process is technically called setting), the carbonation starts:
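Ca(OH)2 + CO2 → CaCO3 + H2O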
This reaction is slow, because the partial pressure of carbon dioxide in the air is low (~ 0.4 millibar). The carbonation reaction requires that the dry cement be exposed to air, so the slaked lime is a non-hydraulic cement and cannot be used under water. This process is called the lime cycle.
Perhaps the earliest known occurrence of cement is from twelve million years ago, when a deposit of natural cement formed after oil shale lying adjacent to a bed of limestone burned from natural causes. These ancient deposits were investigated in the 1960s and 1970s.
Cement, chemically speaking, is a product that includes lime as the primary binding ingredient, but is far from the first material used for cementation. The Babylonians and Assyrians used bitumen to bind together burnt brick or alabaster slabs. In Ancient Egypt, stone blocks were cemented together with a mortar made of sand and roughly burnt gypsum (CaSO4 · 2H2O), which yields plaster of Paris; this mortar often contained calcium carbonate (CaCO3).
Lime (calcium oxide) was used on Crete and by the Ancient Greeks. There is evidence that the Minoans of Crete used crushed potsherds as an artificial pozzolan for hydraulic cement. Nobody knows who first discovered that a combination of hydrated non-hydraulic lime and a pozzolan produces a hydraulic mixture (see also: Pozzolanic reaction), but such concrete was used by the Greeks, specifically the Ancient Macedonians, and three centuries later on a large scale by Roman engineers.
As the Roman architect Vitruvius wrote in De architectura: "There is... a kind of powder which from natural causes produces astonishing results. It is found in the neighborhood of Baiae and in the country belonging to the towns round about Mount Vesuvius. This substance, when mixed with lime and rubble, not only lends strength to buildings of other kinds, but even when piers of it are constructed in the sea, they set hard underwater."
The Greeks used volcanic tuff from the island of Thera as their pozzolan and the Romans used crushed volcanic ash (activated aluminium silicates) with lime. This mixture could set under water, which also increased its resistance to corrosion. The material was called pozzolana from the town of Pozzuoli, west of Naples, where volcanic ash was extracted. In the absence of pozzolanic ash, the Romans used powdered brick or pottery as a substitute, and they may have used crushed tiles for this purpose before discovering natural sources near Rome. The huge dome of the Pantheon in Rome and the massive Baths of Caracalla are examples of ancient structures made from these concretes, many of which still stand. The vast system of Roman aqueducts also made extensive use of hydraulic cement. Roman concrete was rarely used on the outside of buildings. The normal technique was to use brick facing material as the formwork for an infill of mortar mixed with an aggregate of broken pieces of stone, brick, potsherds, recycled chunks of concrete, or other building rubble.
Lightweight concrete was designed and used for the construction of structural elements by the pre-Columbian builders of the advanced civilization at El Tajín, in the Mexican state of Veracruz. A detailed study of the composition of the aggregate and binder shows that the aggregate was pumice and the binder was a pozzolanic cement made with volcanic ash and lime.
It is unknown whether this knowledge was preserved in the literature of the Middle Ages, but medieval masons and some military engineers actively used hydraulic cement in structures such as canals, fortresses, harbors, and shipbuilding facilities. A mixture of lime mortar and aggregate with brick or stone facing material was used in the Eastern Roman Empire as well as in the West into the Gothic period. The German Rhineland continued to use hydraulic mortar throughout the Middle Ages, having local pozzolana deposits called trass.
Tabby is a building material made from oyster shell lime, sand, and whole oyster shells to form a concrete. The Spanish introduced it to the Americas in the sixteenth century.
The technical knowledge for making hydraulic cement was formalized by French and British engineers in the 18th century.
John Smeaton made an important contribution to the development of cements while planning the construction of the third Eddystone Lighthouse (1755–59) in the English Channel now known as Smeaton's Tower. He needed a hydraulic mortar that would set and develop some strength in the twelve-hour period between successive high tides. He performed experiments with combinations of different limestones and additives including trass and pozzolanas and did exhaustive market research on the available hydraulic limes, visiting their production sites, and noted that the "hydraulicity" of the lime was directly related to the clay content of the limestone used to make it. Smeaton was a civil engineer by profession, and took the idea no further.
In the South Atlantic seaboard of the United States, tabby relying on the oyster-shell middens of earlier Native American populations was used in house construction from the 1730s to the 1860s.
In Britain particularly, good quality building stone became ever more expensive during a period of rapid growth, and it became a common practice to construct prestige buildings from the new industrial bricks, and to finish them with a stucco to imitate stone. Hydraulic limes were favored for this, but the need for a fast set time encouraged the development of new cements. Most famous was Parker's "Roman cement". This was developed by James Parker in the 1780s, and finally patented in 1796. It was, in fact, nothing like the material used by the Romans, but was a "natural cement" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of "Roman cement" led other manufacturers to develop rival products by burning artificial mixtures of clay and chalk. Roman cement quickly became popular but was largely replaced by Portland cement in the 1850s.
Apparently unaware of Smeaton's work, the Frenchman Louis Vicat identified the same principle in the first decade of the nineteenth century. Vicat went on to devise a method of combining chalk and clay into an intimate mixture and, burning this, produced an "artificial cement" in 1817 that is considered the "principal forerunner" of Portland cement; Edgar Dobbs of Southwark had patented a cement of this kind in 1811.
In Russia, Egor Cheliev created a new binder by mixing lime and clay. His results were published in 1822 in his book A Treatise on the Art to Prepare a Good Mortar published in St. Petersburg. A few years later in 1825, he published another book, which described various methods of making cement and concrete, and the benefits of cement in the construction of buildings and embankments.
Portland cement, the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-speciality grout, was developed in England in the mid 19th century, and usually originates from limestone. James Frost produced what he called "British cement" in a similar manner around the same time, but did not obtain a patent until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland cement, because the render made from it was similar in color to the prestigious Portland stone quarried on the Isle of Portland, Dorset, England. However, Aspdin's cement was nothing like modern Portland cement but was a first step in its development, called a proto-Portland cement. Joseph Aspdin's son William Aspdin had left his father's company and in his cement manufacturing apparently accidentally produced calcium silicates in the 1840s, a middle step in the development of Portland cement. William Aspdin's innovation was counterintuitive for manufacturers of "artificial cements", because they required more lime in the mix (a problem for his father), a much higher kiln temperature (and therefore more fuel), and the resulting clinker was very hard and rapidly wore down the millstones, which were the only available grinding technology of the time. Manufacturing costs were therefore considerably higher, but the product set reasonably slowly and developed strength quickly, thus opening up a market for use in concrete. The use of concrete in construction grew rapidly from 1850 onward, and was soon the dominant use for cements. Thus Portland cement began its predominant role. Isaac Charles Johnson further refined the production of meso-Portland cement (middle stage of development) and claimed he was the real father of Portland cement.
Setting time and "early strength" are important characteristics of cements. Hydraulic limes, "natural" cements, and "artificial" cements all rely on their belite (2 CaO · SiO2, abbreviated as C2S) content for strength development. Belite develops strength slowly. Because they were burned at temperatures below 1,250 °C (2,280 °F), they contained no alite (3 CaO · SiO2, abbreviated as C3S), which is responsible for early strength in modern cements. The first cement to consistently contain alite was made by William Aspdin in the early 1840s: This was what we call today "modern" Portland cement. Because of the air of mystery with which William Aspdin surrounded his product, others (e.g., Vicat and Johnson) have claimed precedence in this invention, but recent analysis of both his concrete and raw cement have shown that William Aspdin's product made at Northfleet, Kent was a true alite-based cement. However, Aspdin's methods were "rule-of-thumb": Vicat is responsible for establishing the chemical basis of these cements, and Johnson established the importance of sintering the mix in the kiln.
In the US the first large-scale use of cement was Rosendale cement, a natural cement mined from a massive deposit of dolomite discovered in the early 19th century near Rosendale, New York. Rosendale cement was extremely popular for the foundation of buildings (e.g., Statue of Liberty, Capitol Building, Brooklyn Bridge) and lining water pipes. Sorel cement, or magnesia-based cement, was patented in 1867 by the Frenchman Stanislas Sorel. It was stronger than Portland cement but its poor water resistance (leaching) and corrosive properties (pitting corrosion due to the presence of leachable chloride anions and the low pH (8.5–9.5) of its pore water) limited its use as reinforced concrete for building construction.
The next development in the manufacture of Portland cement was the introduction of the rotary kiln. It produced a clinker mixture that was both stronger, because more alite (C3S) is formed at the higher temperature it achieved (1450 °C), and more homogeneous. Because raw material is constantly fed into a rotary kiln, it allowed a continuous manufacturing process to replace lower capacity batch production processes.
Calcium aluminate cements were patented in 1908 in France by Jules Bied for better resistance to sulfates. Also in 1908, Thomas Edison experimented with pre-cast concrete in houses in Union, N.J.
In the US, after World War One, the long curing time of at least a month for Rosendale cement made it unpopular for constructing highways and bridges, and many states and construction firms turned to Portland cement. Because of the switch to Portland cement, by the end of the 1920s only one of the 15 Rosendale cement companies had survived. But in the early 1930s, builders discovered that, while Portland cement set faster, it was not as durable, especially for highways—to the point that some states stopped building highways and roads with cement. Bertrain H. Wait, an engineer whose company had helped construct New York City's Catskill Aqueduct, was impressed with the durability of Rosendale cement, and came up with a blend of both Rosendale and Portland cements that had the good attributes of both. It was highly durable and had a much faster setting time. Wait convinced the New York Commissioner of Highways to construct an experimental section of highway near New Paltz, New York, using one sack of Rosendale to six sacks of Portland cement. It was a success, and for decades the Rosendale-Portland cement blend was used in concrete highway and concrete bridge construction.
Cementitious materials have been used as a nuclear waste immobilizing matrix for more than a half-century. Technologies of waste cementation have been developed and deployed at industrial scale in many countries. Cementitious wasteforms require a careful selection and design process adapted to each specific type of waste to satisfy the strict waste acceptance criteria for long-term storage and disposal.
Modern development of hydraulic cement began with the start of the Industrial Revolution (around 1800), driven by three main needs:
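hydraulic cement render (stucco) for finishing brick buildings in wet climates; hydraulic mortars for masonry construction of harbor works and other structures in contact with sea water; and the development of strong concretes.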
Modern cements are often Portland cement or Portland cement blends, but other cement blends are used in some industrial settings.
Portland cement, a form of hydraulic cement, is by far the most common type of cement in general use around the world. This cement is made by heating limestone (calcium carbonate) with other materials (such as clay) to 1,450 °C (2,640 °F) in a kiln, in a process known as calcination that liberates a molecule of carbon dioxide from the calcium carbonate to form calcium oxide, or quicklime, which then chemically combines with the other materials in the mix to form calcium silicates and other cementitious compounds. The resulting hard substance, called 'clinker', is then ground with a small amount of gypsum (CaSO4·2H2O) into a powder to make ordinary Portland cement, the most commonly used type of cement (often referred to as OPC). Portland cement is a basic ingredient of concrete, mortar, and most non-specialty grout. The most common use for Portland cement is to make concrete. Portland cement may be grey or white.
Portland cement blends are often available as inter-ground mixtures from cement producers, but similar formulations are often also mixed from the ground components at the concrete mixing plant.
Portland blast-furnace slag cement, or blast furnace cement (ASTM C595 and EN 197-1 nomenclature respectively), contains up to 95% ground granulated blast furnace slag, with the rest Portland clinker and a little gypsum. All compositions produce high ultimate strength, but as slag content is increased, early strength is reduced, while sulfate resistance increases and heat evolution diminishes. Used as an economic alternative to Portland sulfate-resisting and low-heat cements.
Portland-fly ash cement contains up to 40% fly ash under ASTM standards (ASTM C595), or 35% under EN standards (EN 197–1). The fly ash is pozzolanic, so that ultimate strength is maintained. Because fly ash addition allows a lower concrete water content, early strength can also be maintained. Where good quality cheap fly ash is available, this can be an economic alternative to ordinary Portland cement.
Portland pozzolan cement includes fly ash cement, since fly ash is a pozzolan, but also includes cements made from other natural or artificial pozzolans. In countries where volcanic ashes are available (e.g., Italy, Chile, Mexico, the Philippines), these cements are often the most common form in use. The maximum replacement ratios are generally defined as for Portland-fly ash cement.
Portland silica fume cement. Addition of silica fume can yield exceptionally high strengths, and cements containing 5–20% silica fume are occasionally produced, with 10% being the maximum allowed addition under EN 197–1. However, silica fume is more usually added to Portland cement at the concrete mixer.
Masonry cements are used for preparing bricklaying mortars and stuccos, and must not be used in concrete. They are usually complex proprietary formulations containing Portland clinker and a number of other ingredients that may include limestone, hydrated lime, air entrainers, retarders, waterproofers, and coloring agents. They are formulated to yield workable mortars that allow rapid and consistent masonry work. Subtle variations of masonry cement in North America are plastic cements and stucco cements. These are designed to produce a controlled bond with masonry blocks.
Expansive cements contain, in addition to Portland clinker, expansive clinkers (usually sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage normally encountered in hydraulic cements. This cement can make concrete for floor slabs (up to 60 m square) without contraction joints.
White blended cements may be made using white clinker (containing little or no iron) and white supplementary materials such as high-purity metakaolin. Colored cements serve decorative purposes. Some standards allow the addition of pigments to produce colored Portland cement. Other standards (e.g., ASTM) do not allow pigments in Portland cement, and colored cements are sold as blended hydraulic cements.
Very finely ground cements are made from mixtures of cement with sand, or with slag or other pozzolan-type minerals, that are extremely finely ground together. Such cements can have the same physical characteristics as normal cement but with 50% less cement, particularly because of the greater surface area available for the chemical reaction. Even with intensive grinding, they can use up to 50% less energy (and thus produce fewer carbon emissions) to fabricate than ordinary Portland cements.
Pozzolan-lime cements are mixtures of ground pozzolan and lime. These are the cements the Romans used, and are present in surviving Roman structures like the Pantheon in Rome. They develop strength slowly, but their ultimate strength can be very high. The hydration products that produce strength are essentially the same as those in Portland cement.
Slag-lime cements—ground granulated blast-furnace slag—are not hydraulic on their own, but are "activated" by addition of alkalis, most economically using lime. They are similar to pozzolan lime cements in their properties. Only granulated slag (i.e., water-quenched, glassy slag) is effective as a cement component.
Supersulfated cements contain about 80% ground granulated blast furnace slag, 15% gypsum or anhydrite and a little Portland clinker or lime as an activator. They produce strength by formation of ettringite, with strength growth similar to a slow Portland cement. They exhibit good resistance to aggressive agents, including sulfate. Calcium aluminate cements are hydraulic cements made primarily from limestone and bauxite. The active ingredients are monocalcium aluminate CaAl2O4 (CaO · Al2O3 or CA in cement chemist notation, CCN) and mayenite Ca12Al14O33 (12 CaO · 7 Al2O3, or C12A7 in CCN). Strength forms by hydration to calcium aluminate hydrates. They are well-adapted for use in refractory (high-temperature resistant) concretes, e.g., for furnace linings.
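For reference, the cement chemist notation (CCN) used above abbreviates the common oxides with single letters:

    C = CaO    S = SiO2    A = Al2O3    F = Fe2O3    H = H2O

so that, for example, monocalcium aluminate CaO·Al2O3 is written CA, and mayenite 12CaO·7Al2O3 is written C12A7.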
Calcium sulfoaluminate cements are made from clinkers that include ye'elimite (Ca4(AlO2)6SO4 or C4A3S in Cement chemist's notation) as a primary phase. They are used in expansive cements, in ultra-high early strength cements, and in "low-energy" cements. Hydration produces ettringite, and specialized physical properties (such as expansion or rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions. Their use as a low-energy alternative to Portland cement has been pioneered in China, where several million tonnes per year are produced. Energy requirements are lower because of the lower kiln temperatures required for reaction, and the lower amount of limestone (which must be endothermically decarbonated) in the mix. In addition, the lower limestone content and lower fuel consumption leads to a CO2 emission around half that associated with Portland clinker. However, SO2 emissions are usually significantly higher.
"Natural" cements corresponding to certain cements of the pre-Portland era, are produced by burning argillaceous limestones at moderate temperatures. The level of clay components in the limestone (around 30–35%) is such that large amounts of belite (the low-early strength, high-late strength mineral in Portland cement) are formed without the formation of excessive amounts of free lime. As with any natural material, such cements have highly variable properties.
Geopolymer cements are made from mixtures of water-soluble alkali metal silicates, and aluminosilicate mineral powders such as fly ash and metakaolin.
Polymer cements are made from organic chemicals that polymerise. Producers often use thermoset materials. While they are often significantly more expensive, they can give a waterproof material that has useful tensile strength.
Sorel cement is a hard, durable cement made by combining magnesium oxide and a magnesium chloride solution.
Fiber mesh cement, or fiber reinforced concrete, is cement that is made up of fibrous materials like synthetic fibers, glass fibers, natural fibers, and steel fibers. This type of mesh is distributed evenly throughout the wet concrete. The purpose of fiber mesh is to reduce water loss from the concrete as well as to enhance its structural integrity. When used in plasters, fiber mesh increases cohesiveness, tensile strength, and impact resistance, and reduces shrinkage; ultimately, the main purpose of these combined properties is to reduce cracking.
Cement starts to set when mixed with water, which causes a series of hydration reactions. The constituents slowly hydrate, and the mineral hydrates solidify and harden; the interlocking of the hydrates gives cement its strength. Contrary to popular belief, hydraulic cement does not set by drying out: proper curing requires maintaining the moisture content necessary for the hydration reactions during the setting and hardening processes. If hydraulic cements dry out during the curing phase, the resulting product can be insufficiently hydrated and significantly weakened. A curing temperature of at least 5 °C and no more than 30 °C is recommended. Young concrete must also be protected against water evaporation caused by direct insolation, elevated temperature, low relative humidity, and wind.
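A commonly quoted approximate stoichiometry for the hydration of alite, the clinker mineral chiefly responsible for early strength, is the following (an idealized form; the composition of the calcium silicate hydrate gel, C-S-H, actually varies):

    2 Ca3SiO5 + 6 H2O → 3CaO·2SiO2·3H2O + 3 Ca(OH)2
    (in cement chemist notation: 2 C3S + 6 H → C3S2H3 + 3 CH)

The interlocking C-S-H gel provides most of the strength, while the portlandite (Ca(OH)2) by-product keeps the pore water highly alkaline.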
The interfacial transition zone (ITZ) is the region of the cement paste around the aggregate particles in concrete, in which a gradual transition in microstructural features occurs. This zone can be up to 35 micrometers wide, and other studies have shown that the width can be up to 50 micrometers. Towards the aggregate surface, the average content of unreacted clinker phase decreases and the porosity increases; similarly, the content of ettringite increases in the ITZ.
Bags of cement routinely have health and safety warnings printed on them, because not only is cement highly alkaline, but the setting process is also exothermic. As a result, wet cement is strongly caustic (pH = 13.5) and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. Some trace elements, such as chromium, from impurities naturally present in the raw materials used to produce cement may cause allergic dermatitis. Reducing agents such as ferrous sulfate (FeSO4) are often added to cement to convert the carcinogenic hexavalent chromate (CrO4^2-) into trivalent chromium (Cr(III)), a less toxic chemical species. Cement users also need to wear appropriate gloves and protective clothing.
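One way to write this reduction for the alkaline conditions prevailing in wet cement is the following balanced sketch (the actual speciation in cement porewater is more complex):

    CrO4^2- + 3 Fe(OH)2 + 4 H2O → Cr(OH)3 + 3 Fe(OH)3 + 2 OH^-

Each Fe(II) donates one electron, so three are needed to take chromium from the hexavalent to the trivalent state.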
In 2010, the world production of hydraulic cement was 3,300 megatonnes (3,600×10^6 short tons). The top three producers were China with 1,800, India with 220, and the USA with 63.5 million tonnes; together, the world's three most populous states accounted for over half of the world total.
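As a quick arithmetic check of the "over half" claim, using only the figures quoted above (a minimal sketch, not official statistics):

    # 2010 hydraulic cement production, million tonnes (figures quoted above)
    world_total = 3300.0
    top_three = {"China": 1800.0, "India": 220.0, "USA": 63.5}
    top_total = sum(top_three.values())
    print(f"Top three: {top_total:.1f} Mt = {top_total / world_total:.0%} of world total")
    # -> Top three: 2083.5 Mt = 63% of world total, i.e. "over half" as stated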
World cement production capacity in 2010 showed a similar picture, with the top three states (China, India, and the USA) accounting for just under half of the world total capacity.
Over 2011 and 2012, global consumption continued to climb, rising to 3585 Mt in 2011 and 3736 Mt in 2012, while annual growth rates eased to 8.3% and 4.2%, respectively.
China, representing an increasing share of world cement consumption, remains the main engine of global growth. By 2012, Chinese demand was recorded at 2160 Mt, representing 58% of world consumption. Annual growth rates, which reached 16% in 2010, appear to have softened, slowing to 5–6% over 2011 and 2012, as China's economy targets a more sustainable growth rate.
Outside of China, worldwide consumption climbed by 4.4% to 1462 Mt in 2010, 5% to 1535 Mt in 2011, and finally 2.7% to 1576 Mt in 2012.
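The quoted ex-China growth rates can be verified directly from the consumption figures above (a minimal arithmetic sketch):

    # Cement consumption outside China, Mt (figures quoted above)
    ex_china = {2010: 1462, 2011: 1535, 2012: 1576}
    years = sorted(ex_china)
    for prev, cur in zip(years, years[1:]):
        print(f"{cur}: {ex_china[cur] / ex_china[prev] - 1:+.1%}")
    # -> 2011: +5.0%, 2012: +2.7%, matching the quoted rates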
Iran is now the third-largest cement producer in the world and has increased its output by over 10% from 2008 to 2011. Because of climbing energy costs in Pakistan and other major cement-producing countries, Iran is in a unique position as a trading partner, utilizing its own surplus petroleum to power clinker plants. Now a top producer in the Middle East, Iran is further increasing its dominant position in local markets and abroad.
The performance in North America and Europe over the 2010–12 period contrasted strikingly with that of China, as the global financial crisis evolved into a sovereign debt crisis, and then recession, for many economies in this region. Cement consumption levels for this region fell by 1.9% in 2010 to 445 Mt, recovered by 4.9% in 2011, then dipped again by 1.1% in 2012.
The performance in the rest of the world, which includes many emerging economies in Asia, Africa, and Latin America and represented some 1020 Mt of cement demand in 2010, was positive and more than offset the declines in North America and Europe. Annual consumption growth was recorded at 7.4% in 2010, moderating to 5.1% and 4.3% in 2011 and 2012, respectively.
As at year-end 2012, the global cement industry consisted of 5673 cement production facilities, including both integrated and grinding, of which 3900 were located in China and 1773 in the rest of the world.
Total cement capacity worldwide was recorded at 5245 Mt in 2012, with 2950 Mt located in China and 2295 Mt in the rest of the world.
"For the past 18 years, China consistently has produced more cement than any other country in the world. [...] (However,) China's cement export peaked in 1994 with 11 million tonnes shipped out and has been in steady decline ever since. Only 5.18 million tonnes were exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out of the market as Thailand is asking as little as $20 for the same quality."
In 2006, it was estimated that China manufactured 1.235 billion tonnes of cement, which was 44% of the world total cement production. "Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion tonnes in 2008, driven by slowing but healthy growth in construction expenditures. Cement consumed in China will amount to 44% of global demand, and China will remain the world's largest national consumer of cement by a large margin."
In 2010, 3.3 billion tonnes of cement was consumed globally. Of this, China accounted for 1.8 billion tonnes.
Cement manufacture causes environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust and gases, noise and vibration when operating machinery and during blasting in quarries, and damage to the countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases is coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down, by returning them to nature or re-cultivating them.
The carbon concentration in cement ranges from ≈5% in cement structures to ≈8% in the case of cement roads. Cement manufacturing releases CO2 into the atmosphere both directly, when calcium carbonate is heated to produce lime and carbon dioxide, and indirectly through the use of energy, if that energy's production involved the emission of CO2. The cement industry produces about 10% of global human-made CO2 emissions, of which 60% comes from the chemical process and 40% from burning fuel. A 2018 Chatham House study estimates that the 4 billion tonnes of cement produced annually account for 8% of worldwide CO2 emissions.
Nearly 900 kg of CO2 are emitted for every 1000 kg of Portland cement produced. In the European Union, the specific energy consumption for the production of cement clinker has been reduced by approximately 30% since the 1970s. This reduction in primary energy requirements is equivalent to approximately 11 million tonnes of coal per year with corresponding benefits in reduction of CO2 emissions. This accounts for approximately 5% of anthropogenic CO2.
The majority of carbon dioxide emissions in the manufacture of Portland cement (approximately 60%) are produced from the chemical decomposition of limestone to lime, an ingredient in Portland cement clinker. These emissions may be reduced by lowering the clinker content of cement. They can also be reduced by alternative fabrication methods, such as intergrinding cement with sand, or with slag or other pozzolan-type minerals, to a very fine powder.
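A rough consistency check on these two figures, as a sketch assuming clinker is about 65% CaO by mass (a typical value that is not stated in the text):

    # Process CO2 from limestone calcination, per tonne of clinker
    M_CO2, M_CaO = 44.01, 56.08   # molar masses, g/mol
    cao_fraction = 0.65           # assumed typical CaO content of clinker
    process_co2 = cao_fraction * (M_CO2 / M_CaO)
    print(f"~{process_co2:.2f} t CO2 per t clinker from the chemistry alone")
    # -> ~0.51 t, consistent with 60% of the ~0.9 t total quoted above (~0.54 t)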
To reduce the transport of heavy raw materials and to minimize the associated costs, it is more economical to build cement plants closer to the limestone quarries than to the consumer centers.
As of 2019, carbon capture and storage was about to be trialed, but its financial viability remained uncertain.
Hydrated products of Portland cement, such as concrete and mortars, slowly reabsorb atmospheric CO2 gas that was released during calcination in the kiln. This natural process, the reverse of calcination, is called carbonation. As it depends on CO2 diffusion into the bulk of the concrete, its rate depends on many parameters, such as environmental conditions and the surface area exposed to the atmosphere. Carbonation is particularly significant at the later stages of the concrete's life, after demolition and crushing of the debris. It has been estimated that, over the whole life cycle of cement products, nearly 30% of the atmospheric CO2 generated by cement production is reabsorbed.
The carbonation process is considered a mechanism of concrete degradation: it reduces the pH of the concrete, which promotes corrosion of the reinforcement steel. However, because CaCO3, the product of Ca(OH)2 carbonation, occupies a greater volume, the porosity of the concrete is reduced, which increases the strength and hardness of the concrete.
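The underlying reaction is the carbonation of portlandite:

    Ca(OH)2 + CO2 → CaCO3 + H2O

The volume increase follows from handbook densities: the molar volume of calcite (≈37 cm3/mol) is greater than that of Ca(OH)2 (≈33 cm3/mol), so the reaction product fills pore space even as it lowers the pH.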
There are proposals to reduce the carbon footprint of hydraulic cement by adopting non-hydraulic cement, lime mortar, for certain applications. It reabsorbs some of the CO2 during hardening, and has a lower energy requirement in production than Portland cement.
Other attempts to increase the absorption of carbon dioxide include cements based on magnesium (Sorel cement).
In some circumstances, mainly depending on the origin and composition of the raw materials used, the high-temperature calcination of limestone and clay minerals can release into the atmosphere gases and dust rich in volatile heavy metals, of which thallium, cadmium, and mercury are the most toxic. Heavy metals (Tl, Cd, Hg, ...) and also selenium are often found as trace elements in common metal sulfides (pyrite (FeS2), zinc blende (ZnS), galena (PbS), ...) present as secondary minerals in most of the raw materials. Environmental regulations exist in many countries to limit these emissions. As of 2011 in the United States, cement kilns were "legally allowed to pump more toxins into the air than are hazardous-waste incinerators."
The presence of heavy metals in the clinker arises both from the natural raw materials and from the use of recycled by-products or alternative fuels. The high pH prevailing in the cement porewater (12.5 < pH < 13.5) limits the mobility of many heavy metals by decreasing their solubility and increasing their sorption onto the cement mineral phases. Nickel, zinc, and lead are commonly found in cement in non-negligible concentrations. Chromium may also directly arise as a natural impurity from the raw materials or as secondary contamination from the abrasion of hard chromium steel alloys used in the ball mills when the clinker is ground. As chromate (CrO4^2-) is toxic and may cause severe skin allergies even at trace concentrations, it is sometimes reduced to trivalent Cr(III) by the addition of ferrous sulfate (FeSO4).
A cement plant consumes 3 to 6 GJ of fuel per tonne of clinker produced, depending on the raw materials and the process used. Most cement kilns today use coal and petroleum coke as primary fuels, and to a lesser extent natural gas and fuel oil. Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln (referred to as co-processing), replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Selected waste and by-products containing useful minerals such as calcium, silica, alumina, and iron can be used as raw materials in the kiln, replacing raw materials such as clay, shale, and limestone. Because some materials have both useful mineral content and recoverable calorific value, the distinction between alternative fuels and raw materials is not always clear. For example, sewage sludge has a low but significant calorific value, and burns to give ash containing minerals useful in the clinker matrix. Scrap automobile and truck tires are useful in cement manufacturing as they have high calorific value and the iron embedded in tires is useful as a feed stock.
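To put the 3–6 GJ figure in perspective, here is a sketch converting it to a coal equivalent, assuming a typical hard-coal calorific value of about 27 GJ/t (an assumption not given in the text):

    # Kiln fuel demand per tonne of clinker, expressed as a coal equivalent
    fuel_gj_range = (3.0, 6.0)    # GJ per tonne of clinker (range quoted above)
    coal_gj_per_tonne = 27.0      # assumed calorific value of hard coal, GJ/t
    low, high = (gj / coal_gj_per_tonne for gj in fuel_gj_range)
    print(f"~{low:.2f} to ~{high:.2f} t of coal per tonne of clinker")
    # -> roughly 0.11 to 0.22 t of coal per tonne of clinker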
Clinker is manufactured by heating raw materials inside the main burner of a kiln to a temperature of 1,450 °C; the flame itself reaches temperatures of 1,800 °C. The material remains at 1,200 °C for 12–15 seconds and/or at 1,800 °C for 5–8 seconds (also referred to as the residence time). These characteristics of a clinker kiln offer numerous benefits: they ensure complete destruction of organic compounds and total neutralization of acid gases, sulphur oxides, and hydrogen chloride. Furthermore, traces of heavy metals are embedded in the clinker structure, and no by-products, such as ash or residues, are produced.
The EU cement industry already uses more than 40% fuels derived from waste and biomass in supplying the thermal energy to the grey clinker making process. Although the choice of these so-called alternative fuels (AF) is typically cost-driven, other factors are becoming more important. Use of alternative fuels provides benefits for both society and the company: CO2 emissions are lower than with fossil fuels, waste can be co-processed in an efficient and sustainable manner, and the demand for certain virgin materials can be reduced. Yet there are large differences in the share of alternative fuels used between the European Union (EU) member states. The societal benefits could be improved if more member states increased their alternative fuels share. The Ecofys study assessed the barriers and opportunities for further uptake of alternative fuels in 14 EU member states. It found that local factors constrain the market potential to a much larger extent than the technical and economic feasibility of the cement industry itself.
Reduced-footprint cement is a cementitious material that meets or exceeds the functional performance capabilities of Portland cement. Various techniques are under development. One is geopolymer cement, which incorporates recycled materials, thereby reducing consumption of raw materials, water, and energy.
Another approach is to reduce or eliminate the production and release of damaging pollutants and greenhouse gases, particularly CO2.
Growing environmental concerns and the increasing cost of fossil fuels have, in many countries, resulted in a sharp reduction in the resources needed to produce cement, and in effluents (dust and exhaust gases).
A team at the University of Edinburgh has developed the 'DUPE' process based on the microbial activity of Sporosarcina pasteurii, a bacterium precipitating calcium carbonate, which, when mixed with sand and urine, can produce mortar blocks with a compressive strength 70% of that of concrete.
Recycling old cement in electric arc furnaces is another approach.
An overview of climate-friendly methods for cement production can be found here. | [
{
"paragraph_id": 0,
"text": "A cement is a binder, a chemical substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry, or with sand and gravel, produces concrete. Concrete is the most widely used material in existence and is behind only water as the planet's most-consumed resource.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cements used in construction are usually inorganic, often lime or calcium silicate based, which can be characterized as hydraulic or the less common non-hydraulic, depending on the ability of the cement to set in the presence of water (see hydraulic and non-hydraulic lime plaster).",
"title": ""
},
{
"paragraph_id": 2,
"text": "Hydraulic cements (e.g., Portland cement) set and become adhesive through a chemical reaction between the dry ingredients and water. The chemical reaction results in mineral hydrates that are not very water-soluble and so are quite durable in water and safe from chemical attack. This allows setting in wet conditions or under water and further protects the hardened material from chemical attack. The chemical process for hydraulic cement was found by ancient Romans who used volcanic ash (pozzolana) with added lime (calcium oxide).",
"title": ""
},
{
"paragraph_id": 3,
"text": "Non-hydraulic cement (less common) does not set in wet conditions or under water. Rather, it sets as it dries and reacts with carbon dioxide in the air. It is resistant to attack by chemicals after setting.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The word \"cement\" can be traced back to the Ancient Roman term opus caementicium, used to describe masonry resembling modern concrete that was made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick supplements that were added to the burnt lime, to obtain a hydraulic binder, were later referred to as cementum, cimentum, cäment, and cement. In modern times, organic polymers are sometimes used as cements in concrete.",
"title": ""
},
{
"paragraph_id": 5,
"text": "World production of cement is about 4.4 billion tonnes per year (2021, estimation), of which about half is made in China, followed by India and Vietnam.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The cement production process is responsible for nearly 8% (2018) of global CO2 emissions, which includes heating raw materials in a cement kiln by fuel combustion and resulting release of CO2 stored in the calcium carbonate (calcination process). Its hydrated products, such as concrete, gradually reabsorb substantial amounts of atmospheric CO2 (carbonation process) compensating near 30% of initial CO2 emissions, as estimations suggest.",
"title": ""
},
{
"paragraph_id": 7,
"text": "Cement materials can be classified into two distinct categories: hydraulic cements and non-hydraulic cements according to their respective setting and hardening mechanisms. Hydraulic cement setting and hardening involves hydration reactions and therefore requires water, while non-hydraulic cements only react with a gas and can directly set under air.",
"title": "Chemistry"
},
{
"paragraph_id": 8,
"text": "By far the most common type of cement is hydraulic cement, which hardens by hydration of the clinker minerals when water is added. Hydraulic cements (such as Portland cement) are made of a mixture of silicates and oxides, the four main mineral phases of the clinker, abbreviated in the cement chemist notation, being:",
"title": "Chemistry"
},
{
"paragraph_id": 9,
"text": "The silicates are responsible for the cement's mechanical properties — the tricalcium aluminate and brownmillerite are essential for the formation of the liquid phase during the sintering (firing) process of clinker at high temperature in the kiln. The chemistry of these reactions is not completely clear and is still the object of research.",
"title": "Chemistry"
},
{
"paragraph_id": 10,
"text": "First, the limestone (calcium carbonate) is burned to remove its carbon, producing lime (calcium oxide) in what is known as a calcination reaction. This single chemical reaction is a major emitter of global carbon dioxide emissions.",
"title": "Chemistry"
},
{
"paragraph_id": 11,
"text": "The lime reacts with silicon dioxide to produce dicalcium silicate and tricalcium silicate.",
"title": "Chemistry"
},
{
"paragraph_id": 12,
"text": "The lime also reacts with aluminium oxide to form tricalcium aluminate.",
"title": "Chemistry"
},
{
"paragraph_id": 13,
"text": "In the last step, calcium oxide, aluminium oxide, and ferric oxide react together to form cement.",
"title": "Chemistry"
},
{
"paragraph_id": 14,
"text": "A less common form of cement is non-hydraulic cement, such as slaked lime (calcium oxide mixed with water), which hardens by carbonation in contact with carbon dioxide, which is present in the air (~ 412 vol. ppm ≃ 0.04 vol. %). First calcium oxide (lime) is produced from calcium carbonate (limestone or chalk) by calcination at temperatures above 825 °C (1,517 °F) for about 10 hours at atmospheric pressure:",
"title": "Chemistry"
},
{
"paragraph_id": 15,
"text": "The calcium oxide is then spent (slaked) by mixing it with water to make slaked lime (calcium hydroxide):",
"title": "Chemistry"
},
{
"paragraph_id": 16,
"text": "Once the excess water is completely evaporated (this process is technically called setting), the carbonation starts:",
"title": "Chemistry"
},
{
"paragraph_id": 17,
"text": "This reaction is slow, because the partial pressure of carbon dioxide in the air is low (~ 0.4 millibar). The carbonation reaction requires that the dry cement be exposed to air, so the slaked lime is a non-hydraulic cement and cannot be used under water. This process is called the lime cycle.",
"title": "Chemistry"
},
{
"paragraph_id": 18,
"text": "Perhaps the earliest known occurrence of cement is from twelve million years ago. A deposit of cement was formed after an occurrence of oil shale located adjacent to a bed of limestone burned by natural causes. These ancient deposits were investigated in the 1960s and 1970s.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "Cement, chemically speaking, is a product that includes lime as the primary binding ingredient, but is far from the first material used for cementation. The Babylonians and Assyrians used bitumen to bind together burnt brick or alabaster slabs. In Ancient Egypt, stone blocks were cemented together with a mortar made of sand and roughly burnt gypsum (CaSO4 · 2H2O), which is Plaster of Paris, which often contained calcium carbonate (CaCO3),",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Lime (calcium oxide) was used on Crete and by the Ancient Greeks. There is evidence that the Minoans of Crete used crushed potsherds as an artificial pozzolan for hydraulic cement. Nobody knows who first discovered that a combination of hydrated non-hydraulic lime and a pozzolan produces a hydraulic mixture (see also: Pozzolanic reaction), but such concrete was used by the Greeks, specifically the Ancient Macedonians, and three centuries later on a large scale by Roman engineers.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "There is... a kind of powder which from natural causes produces astonishing results. It is found in the neighborhood of Baiae and in the country belonging to the towns round about Mount Vesuvius. This substance when mixed with lime and rubble not only lends strength to buildings of other kinds but even when piers of it are constructed in the sea, they set hard underwater.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The Greeks used volcanic tuff from the island of Thera as their pozzolan and the Romans used crushed volcanic ash (activated aluminium silicates) with lime. This mixture could set under water, increasing its resistance to corrosion like rust. The material was called pozzolana from the town of Pozzuoli, west of Naples where volcanic ash was extracted. In the absence of pozzolanic ash, the Romans used powdered brick or pottery as a substitute and they may have used crushed tiles for this purpose before discovering natural sources near Rome. The huge dome of the Pantheon in Rome and the massive Baths of Caracalla are examples of ancient structures made from these concretes, many of which still stand. The vast system of Roman aqueducts also made extensive use of hydraulic cement. Roman concrete was rarely used on the outside of buildings. The normal technique was to use brick facing material as the formwork for an infill of mortar mixed with an aggregate of broken pieces of stone, brick, potsherds, recycled chunks of concrete, or other building rubble.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "Lightweight concrete was designed and used for the construction of structural elements by the pre-Columbian builders who lived in a very advanced civilisation in El Tajin near Mexico City, in Mexico. A detailed study of the composition of the aggregate and binder show that the aggregate was pumice and the binder was a pozzolanic cement made with volcanic ash and lime.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Any preservation of this knowledge in literature from the Middle Ages is unknown, but medieval masons and some military engineers actively used hydraulic cement in structures such as canals, fortresses, harbors, and shipbuilding facilities. A mixture of lime mortar and aggregate with brick or stone facing material was used in the Eastern Roman Empire as well as in the West into the Gothic period. The German Rhineland continued to use hydraulic mortar throughout the Middle Ages, having local pozzolana deposits called trass.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "Tabby is a building material made from oyster shell lime, sand, and whole oyster shells to form a concrete. The Spanish introduced it to the Americas in the sixteenth century.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The technical knowledge for making hydraulic cement was formalized by French and British engineers in the 18th century.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "John Smeaton made an important contribution to the development of cements while planning the construction of the third Eddystone Lighthouse (1755–59) in the English Channel now known as Smeaton's Tower. He needed a hydraulic mortar that would set and develop some strength in the twelve-hour period between successive high tides. He performed experiments with combinations of different limestones and additives including trass and pozzolanas and did exhaustive market research on the available hydraulic limes, visiting their production sites, and noted that the \"hydraulicity\" of the lime was directly related to the clay content of the limestone used to make it. Smeaton was a civil engineer by profession, and took the idea no further.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In the South Atlantic seaboard of the United States, tabby relying on the oyster-shell middens of earlier Native American populations was used in house construction from the 1730s to the 1860s.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In Britain particularly, good quality building stone became ever more expensive during a period of rapid growth, and it became a common practice to construct prestige buildings from the new industrial bricks, and to finish them with a stucco to imitate stone. Hydraulic limes were favored for this, but the need for a fast set time encouraged the development of new cements. Most famous was Parker's \"Roman cement\". This was developed by James Parker in the 1780s, and finally patented in 1796. It was, in fact, nothing like material used by the Romans, but was a \"natural cement\" made by burning septaria – nodules that are found in certain clay deposits, and that contain both clay minerals and calcium carbonate. The burnt nodules were ground to a fine powder. This product, made into a mortar with sand, set in 5–15 minutes. The success of \"Roman cement\" led other manufacturers to develop rival products by burning artificial hydraulic lime cements of clay and chalk. Roman cement quickly became popular but was largely replaced by Portland cement in the 1850s.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Apparently unaware of Smeaton's work, the same principle was identified by Frenchman Louis Vicat in the first decade of the nineteenth century. Vicat went on to devise a method of combining chalk and clay into an intimate mixture, and, burning this, produced an \"artificial cement\" in 1817 considered the \"principal forerunner\" of Portland cement and \"...Edgar Dobbs of Southwark patented a cement of this kind in 1811.\"",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In Russia, Egor Cheliev created a new binder by mixing lime and clay. His results were published in 1822 in his book A Treatise on the Art to Prepare a Good Mortar published in St. Petersburg. A few years later in 1825, he published another book, which described various methods of making cement and concrete, and the benefits of cement in the construction of buildings and embankments.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "Portland cement, the most common type of cement in general use around the world as a basic ingredient of concrete, mortar, stucco, and non-speciality grout, was developed in England in the mid 19th century, and usually originates from limestone. James Frost produced what he called \"British cement\" in a similar manner around the same time, but did not obtain a patent until 1822. In 1824, Joseph Aspdin patented a similar material, which he called Portland cement, because the render made from it was in color similar to the prestigious Portland stone quarried on the Isle of Portland, Dorset, England. However, Aspdins' cement was nothing like modern Portland cement but was a first step in its development, called a proto-Portland cement. Joseph Aspdins' son William Aspdin had left his father's company and in his cement manufacturing apparently accidentally produced calcium silicates in the 1840s, a middle step in the development of Portland cement. William Aspdin's innovation was counterintuitive for manufacturers of \"artificial cements\", because they required more lime in the mix (a problem for his father), a much higher kiln temperature (and therefore more fuel), and the resulting clinker was very hard and rapidly wore down the millstones, which were the only available grinding technology of the time. Manufacturing costs were therefore considerably higher, but the product set reasonably slowly and developed strength quickly, thus opening up a market for use in concrete. The use of concrete in construction grew rapidly from 1850 onward, and was soon the dominant use for cements. Thus Portland cement began its predominant role. Isaac Charles Johnson further refined the production of meso-Portland cement (middle stage of development) and claimed he was the real father of Portland cement.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Setting time and \"early strength\" are important characteristics of cements. Hydraulic limes, \"natural\" cements, and \"artificial\" cements all rely on their belite (2 CaO · SiO2, abbreviated as C2S) content for strength development. Belite develops strength slowly. Because they were burned at temperatures below 1,250 °C (2,280 °F), they contained no alite (3 CaO · SiO2, abbreviated as C3S), which is responsible for early strength in modern cements. The first cement to consistently contain alite was made by William Aspdin in the early 1840s: This was what we call today \"modern\" Portland cement. Because of the air of mystery with which William Aspdin surrounded his product, others (e.g., Vicat and Johnson) have claimed precedence in this invention, but recent analysis of both his concrete and raw cement have shown that William Aspdin's product made at Northfleet, Kent was a true alite-based cement. However, Aspdin's methods were \"rule-of-thumb\": Vicat is responsible for establishing the chemical basis of these cements, and Johnson established the importance of sintering the mix in the kiln.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In the US the first large-scale use of cement was Rosendale cement, a natural cement mined from a massive deposit of dolomite discovered in the early 19th century near Rosendale, New York. Rosendale cement was extremely popular for the foundation of buildings (e.g., Statue of Liberty, Capitol Building, Brooklyn Bridge) and lining water pipes. Sorel cement, or magnesia-based cement, was patented in 1867 by the Frenchman Stanislas Sorel. It was stronger than Portland cement but its poor water resistance (leaching) and corrosive properties (pitting corrosion due to the presence of leachable chloride anions and the low pH (8.5–9.5) of its pore water) limited its use as reinforced concrete for building construction.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The next development in the manufacture of Portland cement was the introduction of the rotary kiln. It produced a clinker mixture that was both stronger, because more alite (C3S) is formed at the higher temperature it achieved (1450 °C), and more homogeneous. Because raw material is constantly fed into a rotary kiln, it allowed a continuous manufacturing process to replace lower capacity batch production processes.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "Calcium aluminate cements were patented in 1908 in France by Jules Bied for better resistance to sulfates. Also in 1908, Thomas Edison experimented with pre-cast concrete in houses in Union, N.J.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "In the US, after World War One, the long curing time of at least a month for Rosendale cement made it unpopular for constructing highways and bridges, and many states and construction firms turned to Portland cement. Because of the switch to Portland cement, by the end of the 1920s only one of the 15 Rosendale cement companies had survived. But in the early 1930s, builders discovered that, while Portland cement set faster, it was not as durable, especially for highways—to the point that some states stopped building highways and roads with cement. Bertrain H. Wait, an engineer whose company had helped construct the New York City's Catskill Aqueduct, was impressed with the durability of Rosendale cement, and came up with a blend of both Rosendale and Portland cements that had the good attributes of both. It was highly durable and had a much faster setting time. Wait convinced the New York Commissioner of Highways to construct an experimental section of highway near New Paltz, New York, using one sack of Rosendale to six sacks of Portland cement. It was a success, and for decades the Rosendale-Portland cement blend was used in concrete highway and concrete bridge construction.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "Cementitious materials have been used as a nuclear waste immobilizing matrix for more than a half-century. Technologies of waste cementation have been developed and deployed at industrial scale in many countries. Cementitious wasteforms require a careful selection and design process adapted to each specific type of waste to satisfy the strict waste acceptance criteria for long-term storage and disposal.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "Modern development of hydraulic cement began with the start of the Industrial Revolution (around 1800), driven by three main needs:",
"title": "Modern cements"
},
{
"paragraph_id": 40,
"text": "Modern cements are often Portland cement or Portland cement blends, but other cement blends are used in some industrial settings.",
"title": "Modern cements"
},
{
"paragraph_id": 41,
"text": "Portland cement, a form of hydraulic cement, is by far the most common type of cement in general use around the world. This cement is made by heating limestone (calcium carbonate) with other materials (such as clay) to 1,450 °C (2,640 °F) in a kiln, in a process known as calcination that liberates a molecule of carbon dioxide from the calcium carbonate to form calcium oxide, or quicklime, which then chemically combines with the other materials in the mix to form calcium silicates and other cementitious compounds. The resulting hard substance, called 'clinker', is then ground with a small amount of gypsum (CaSO4·2H2O) into a powder to make ordinary Portland cement, the most commonly used type of cement (often referred to as OPC). Portland cement is a basic ingredient of concrete, mortar, and most non-specialty grout. The most common use for Portland cement is to make concrete. Portland cement may be grey or white.",
"title": "Modern cements"
},
{
"paragraph_id": 42,
"text": "Portland cement blends are often available as inter-ground mixtures from cement producers, but similar formulations are often also mixed from the ground components at the concrete mixing plant.",
"title": "Modern cements"
},
{
"paragraph_id": 43,
"text": "Portland blast-furnace slag cement, or blast furnace cement (ASTM C595 and EN 197-1 nomenclature respectively), contains up to 95% ground granulated blast furnace slag, with the rest Portland clinker and a little gypsum. All compositions produce high ultimate strength, but as slag content is increased, early strength is reduced, while sulfate resistance increases and heat evolution diminishes. Used as an economic alternative to Portland sulfate-resisting and low-heat cements.",
"title": "Modern cements"
},
{
"paragraph_id": 44,
"text": "Portland-fly ash cement contains up to 40% fly ash under ASTM standards (ASTM C595), or 35% under EN standards (EN 197–1). The fly ash is pozzolanic, so that ultimate strength is maintained. Because fly ash addition allows a lower concrete water content, early strength can also be maintained. Where good quality cheap fly ash is available, this can be an economic alternative to ordinary Portland cement.",
"title": "Modern cements"
},
{
"paragraph_id": 45,
"text": "Portland pozzolan cement includes fly ash cement, since fly ash is a pozzolan, but also includes cements made from other natural or artificial pozzolans. In countries where volcanic ashes are available (e.g., Italy, Chile, Mexico, the Philippines), these cements are often the most common form in use. The maximum replacement ratios are generally defined as for Portland-fly ash cement.",
"title": "Modern cements"
},
{
"paragraph_id": 46,
"text": "Portland silica fume cement. Addition of silica fume can yield exceptionally high strengths, and cements containing 5–20% silica fume are occasionally produced, with 10% being the maximum allowed addition under EN 197–1. However, silica fume is more usually added to Portland cement at the concrete mixer.",
"title": "Modern cements"
},
{
"paragraph_id": 47,
"text": "Masonry cements are used for preparing bricklaying mortars and stuccos, and must not be used in concrete. They are usually complex proprietary formulations containing Portland clinker and a number of other ingredients that may include limestone, hydrated lime, air entrainers, retarders, waterproofers, and coloring agents. They are formulated to yield workable mortars that allow rapid and consistent masonry work. Subtle variations of masonry cement in North America are plastic cements and stucco cements. These are designed to produce a controlled bond with masonry blocks.",
"title": "Modern cements"
},
{
"paragraph_id": 48,
"text": "Expansive cements contain, in addition to Portland clinker, expansive clinkers (usually sulfoaluminate clinkers), and are designed to offset the effects of drying shrinkage normally encountered in hydraulic cements. This cement can make concrete for floor slabs (up to 60 m square) without contraction joints.",
"title": "Modern cements"
},
{
"paragraph_id": 49,
"text": "White blended cements may be made using white clinker (containing little or no iron) and white supplementary materials such as high-purity metakaolin. Colored cements serve decorative purposes. Some standards allow the addition of pigments to produce colored Portland cement. Other standards (e.g., ASTM) do not allow pigments in Portland cement, and colored cements are sold as blended hydraulic cements.",
"title": "Modern cements"
},
{
"paragraph_id": 50,
"text": "Very finely ground cements are cement mixed with sand or with slag or other pozzolan type minerals that are extremely finely ground together. Such cements can have the same physical characteristics as normal cement but with 50% less cement, particularly because there is more surface area for the chemical reaction. Even with intensive grinding they can use up to 50% less energy (and thus less carbon emissions) to fabricate than ordinary Portland cements.",
"title": "Modern cements"
},
{
"paragraph_id": 51,
"text": "Pozzolan-lime cements are mixtures of ground pozzolan and lime. These are the cements the Romans used, and are present in surviving Roman structures like the Pantheon in Rome. They develop strength slowly, but their ultimate strength can be very high. The hydration products that produce strength are essentially the same as those in Portland cement.",
"title": "Modern cements"
},
{
"paragraph_id": 52,
"text": "Slag-lime cements—ground granulated blast-furnace slag—are not hydraulic on their own, but are \"activated\" by addition of alkalis, most economically using lime. They are similar to pozzolan lime cements in their properties. Only granulated slag (i.e., water-quenched, glassy slag) is effective as a cement component.",
"title": "Modern cements"
},
{
"paragraph_id": 53,
"text": "Supersulfated cements contain about 80% ground granulated blast furnace slag, 15% gypsum or anhydrite and a little Portland clinker or lime as an activator. They produce strength by formation of ettringite, with strength growth similar to a slow Portland cement. They exhibit good resistance to aggressive agents, including sulfate. Calcium aluminate cements are hydraulic cements made primarily from limestone and bauxite. The active ingredients are monocalcium aluminate CaAl2O4 (CaO · Al2O3 or CA in cement chemist notation, CCN) and mayenite Ca12Al14O33 (12 CaO · 7 Al2O3, or C12A7 in CCN). Strength forms by hydration to calcium aluminate hydrates. They are well-adapted for use in refractory (high-temperature resistant) concretes, e.g., for furnace linings.",
"title": "Modern cements"
},
{
"paragraph_id": 54,
"text": "Calcium sulfoaluminate cements are made from clinkers that include ye'elimite (Ca4(AlO2)6SO4 or C4A3S in Cement chemist's notation) as a primary phase. They are used in expansive cements, in ultra-high early strength cements, and in \"low-energy\" cements. Hydration produces ettringite, and specialized physical properties (such as expansion or rapid reaction) are obtained by adjustment of the availability of calcium and sulfate ions. Their use as a low-energy alternative to Portland cement has been pioneered in China, where several million tonnes per year are produced. Energy requirements are lower because of the lower kiln temperatures required for reaction, and the lower amount of limestone (which must be endothermically decarbonated) in the mix. In addition, the lower limestone content and lower fuel consumption leads to a CO2 emission around half that associated with Portland clinker. However, SO2 emissions are usually significantly higher.",
"title": "Modern cements"
},
{
"paragraph_id": 55,
"text": "\"Natural\" cements corresponding to certain cements of the pre-Portland era, are produced by burning argillaceous limestones at moderate temperatures. The level of clay components in the limestone (around 30–35%) is such that large amounts of belite (the low-early strength, high-late strength mineral in Portland cement) are formed without the formation of excessive amounts of free lime. As with any natural material, such cements have highly variable properties.",
"title": "Modern cements"
},
{
"paragraph_id": 56,
"text": "Geopolymer cements are made from mixtures of water-soluble alkali metal silicates, and aluminosilicate mineral powders such as fly ash and metakaolin.",
"title": "Modern cements"
},
{
"paragraph_id": 57,
"text": "Polymer cements are made from organic chemicals that polymerise. Producers often use thermoset materials. While they are often significantly more expensive, they can give a water proof material that has useful tensile strength.",
"title": "Modern cements"
},
{
"paragraph_id": 58,
"text": "Sorel Cement is a hard, durable cement made by combining magnesium oxide and a magnesium chloride solution",
"title": "Modern cements"
},
{
"paragraph_id": 59,
"text": "Fiber mesh cement or fiber reinforced concrete is cement that is made up of fibrous materials like synthetic fibers, glass fibers, natural fibers, and steel fibers. This type of mesh is distributed evenly throughout the wet concrete. The purpose of fiber mesh is to reduce water loss from the concrete as well as enhance its structural integrity. When used in plasters, fiber mesh increases cohesiveness, tensile strength, impact resistance, and to reduce shrinkage; ultimately, the main purpose of these combined properties is to reduce cracking.",
"title": "Modern cements"
},
{
"paragraph_id": 60,
"text": "Cement starts to set when mixed with water, which causes a series of hydration chemical reactions. The constituents slowly hydrate and the mineral hydrates solidify and harden. The interlocking of the hydrates gives cement its strength. Contrary to popular belief, hydraulic cement does not set by drying out — proper curing requires maintaining the appropriate moisture content necessary for the hydration reactions during the setting and the hardening processes. If hydraulic cements dry out during the curing phase, the resulting product can be insufficiently hydrated and significantly weakened. A minimum temperature of 5 °C is recommended, and no more than 30 °C. The concrete at young age must be protected against water evaporation due to direct insolation, elevated temperature, low relative humidity and wind.",
"title": "Setting, hardening and curing"
},
{
"paragraph_id": 61,
"text": "The interfacial transition zone (ITZ) is a region of the cement paste around the aggregate particles in concrete. In the zone, a gradual transition in the microstructural features occurs. This zone can be up to 35 micrometer wide. Other studies have shown that the width can be up to 50 micrometer. The average content of unreacted clinker phase decreases and porosity decreases towards the aggregate surface. Similarly, the content of ettringite increases in ITZ.",
"title": "Setting, hardening and curing"
},
{
"paragraph_id": 62,
"text": "Bags of cement routinely have health and safety warnings printed on them because not only is cement highly alkaline, but the setting process is exothermic. As a result, wet cement is strongly caustic (pH = 13.5) and can easily cause severe skin burns if not promptly washed off with water. Similarly, dry cement powder in contact with mucous membranes can cause severe eye or respiratory irritation. Some trace elements, such as chromium, from impurities naturally present in the raw materials used to produce cement may cause allergic dermatitis. Reducing agents such as ferrous sulfate (FeSO4) are often added to cement to convert the carcinogenic hexavalent chromate (CrO4) into trivalent chromium (Cr), a less toxic chemical species. Cement users need also to wear appropriate gloves and protective clothing.",
"title": "Safety issues"
},
{
"paragraph_id": 63,
"text": "In 2010, the world production of hydraulic cement was 3,300 megatonnes (3,600×10^ short tons). The top three producers were China with 1,800, India with 220, and USA with 63.5 million tonnes for a total of over half the world total by the world's three most populated states.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 64,
"text": "For the world capacity to produce cement in 2010, the situation was similar with the top three states (China, India, and USA) accounting for just under half the world total capacity.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 65,
"text": "Over 2011 and 2012, global consumption continued to climb, rising to 3585 Mt in 2011 and 3736 Mt in 2012, while annual growth rates eased to 8.3% and 4.2%, respectively.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 66,
"text": "China, representing an increasing share of world cement consumption, remains the main engine of global growth. By 2012, Chinese demand was recorded at 2160 Mt, representing 58% of world consumption. Annual growth rates, which reached 16% in 2010, appear to have softened, slowing to 5–6% over 2011 and 2012, as China's economy targets a more sustainable growth rate.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 67,
"text": "Outside of China, worldwide consumption climbed by 4.4% to 1462 Mt in 2010, 5% to 1535 Mt in 2011, and finally 2.7% to 1576 Mt in 2012.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 68,
"text": "Iran is now the 3rd largest cement producer in the world and has increased its output by over 10% from 2008 to 2011. Because of climbing energy costs in Pakistan and other major cement-producing countries, Iran is in a unique position as a trading partner, utilizing its own surplus petroleum to power clinker plants. Now a top producer in the Middle-East, Iran is further increasing its dominant position in local markets and abroad.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 69,
"text": "The performance in North America and Europe over the 2010–12 period contrasted strikingly with that of China, as the global financial crisis evolved into a sovereign debt crisis for many economies in this region and recession. Cement consumption levels for this region fell by 1.9% in 2010 to 445 Mt, recovered by 4.9% in 2011, then dipped again by 1.1% in 2012.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 70,
"text": "The performance in the rest of the world, which includes many emerging economies in Asia, Africa and Latin America and representing some 1020 Mt cement demand in 2010, was positive and more than offset the declines in North America and Europe. Annual consumption growth was recorded at 7.4% in 2010, moderating to 5.1% and 4.3% in 2011 and 2012, respectively.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 71,
"text": "As at year-end 2012, the global cement industry consisted of 5673 cement production facilities, including both integrated and grinding, of which 3900 were located in China and 1773 in the rest of the world.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 72,
"text": "Total cement capacity worldwide was recorded at 5245 Mt in 2012, with 2950 Mt located in China and 2295 Mt in the rest of the world.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 73,
"text": "\"For the past 18 years, China consistently has produced more cement than any other country in the world. [...] (However,) China's cement export peaked in 1994 with 11 million tonnes shipped out and has been in steady decline ever since. Only 5.18 million tonnes were exported out of China in 2002. Offered at $34 a ton, Chinese cement is pricing itself out of the market as Thailand is asking as little as $20 for the same quality.\"",
"title": "Cement industry in the world"
},
{
"paragraph_id": 74,
"text": "In 2006, it was estimated that China manufactured 1.235 billion tonnes of cement, which was 44% of the world total cement production. \"Demand for cement in China is expected to advance 5.4% annually and exceed 1 billion tonnes in 2008, driven by slowing but healthy growth in construction expenditures. Cement consumed in China will amount to 44% of global demand, and China will remain the world's largest national consumer of cement by a large margin.\"",
"title": "Cement industry in the world"
},
{
"paragraph_id": 75,
"text": "In 2010, 3.3 billion tonnes of cement was consumed globally. Of this, China accounted for 1.8 billion tonnes.",
"title": "Cement industry in the world"
},
{
"paragraph_id": 76,
"text": "Cement manufacture causes environmental impacts at all stages of the process. These include emissions of airborne pollution in the form of dust, gases, noise and vibration when operating machinery and during blasting in quarries, and damage to countryside from quarrying. Equipment to reduce dust emissions during quarrying and manufacture of cement is widely used, and equipment to trap and separate exhaust gases are coming into increased use. Environmental protection also includes the re-integration of quarries into the countryside after they have been closed down by returning them to nature or re-cultivating them.",
"title": "Environmental impacts"
},
{
"paragraph_id": 77,
"text": "Carbon concentration in cement spans from ≈5% in cement structures to ≈8% in the case of roads in cement. Cement manufacturing releases CO2 in the atmosphere both directly when calcium carbonate is heated, producing lime and carbon dioxide, and also indirectly through the use of energy if its production involves the emission of CO2. The cement industry produces about 10% of global human-made CO2 emissions, of which 60% is from the chemical process, and 40% from burning fuel. A Chatham House study from 2018 estimates that the 4 billion tonnes of cement produced annually account for 8% of worldwide CO2 emissions.",
"title": "Environmental impacts"
},
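As a rough plausibility check on the 60/40 split quoted above, the process share of the emissions can be reproduced from the stoichiometry of calcination alone. The following is a minimal sketch in Python, assuming a typical clinker CaO content of about 65% (an assumed value, not a figure from this article):

```python
# Estimate process CO2 from calcination stoichiometry: CaCO3 -> CaO + CO2.
M_CACO3, M_CAO, M_CO2 = 100.09, 56.08, 44.01  # molar masses in g/mol

cao_fraction = 0.65  # ASSUMED typical CaO mass fraction of clinker
caco3_per_tonne = cao_fraction * M_CACO3 / M_CAO  # t of CaCO3 calcined per t of clinker
process_co2 = caco3_per_tonne * M_CO2 / M_CACO3   # t of CO2 from the chemical process alone

# Per the paragraph above, the chemical process is ~60% of total emissions,
# with fuel combustion supplying the remaining ~40%.
total_co2 = process_co2 / 0.60

print(f"process CO2: {process_co2:.2f} t per t of clinker")   # ~0.51
print(f"implied total: {total_co2:.2f} t per t of clinker")   # ~0.85
```

The implied total of roughly 0.85 t of CO2 per tonne is consistent with the "nearly 900 kg per 1000 kg" figure quoted in the next paragraph.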
{
"paragraph_id": 78,
"text": "Nearly 900 kg of CO2 are emitted for every 1000 kg of Portland cement produced. In the European Union, the specific energy consumption for the production of cement clinker has been reduced by approximately 30% since the 1970s. This reduction in primary energy requirements is equivalent to approximately 11 million tonnes of coal per year with corresponding benefits in reduction of CO2 emissions. This accounts for approximately 5% of anthropogenic CO2.",
"title": "Environmental impacts"
},
{
"paragraph_id": 79,
"text": "The majority of carbon dioxide emissions in the manufacture of Portland cement (approximately 60%) are produced from the chemical decomposition of limestone to lime, an ingredient in Portland cement clinker. These emissions may be reduced by lowering the clinker content of cement. They can also be reduced by alternative fabrication methods such as the intergrinding cement with sand or with slag or other pozzolan type minerals to a very fine powder.",
"title": "Environmental impacts"
},
{
"paragraph_id": 80,
"text": "To reduce the transport of heavier raw materials and to minimize the associated costs, it is more economical to build cement plants closer to the limestone quarries rather than to the consumer centers.",
"title": "Environmental impacts"
},
{
"paragraph_id": 81,
"text": "As of 2019 carbon capture and storage is about to be trialed, but its financial viability is uncertain.",
"title": "Environmental impacts"
},
{
"paragraph_id": 82,
"text": "Hydrated products of Portland cement, such as concrete and mortars, slowly reabsorb atmospheric CO2 gas, which has been released during calcination in a kiln. This natural process, reversed to calcination, is called carbonation. As it depends on CO2 diffusion into the bulk of concrete, its rate depends on many parameters, such as environmental conditions and surface area exposed to the atmosphere. Carbonation is particularly significant at the latter stages of the concrete life - after demolition and crushing of the debris. It was estimated that during the whole life-cycle of cement products, it can be reabsorbed nearly 30% of atmospheric CO2 generated by cement production.",
"title": "Environmental impacts"
},
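For reference, the two standard reactions behind this cycle can be written out explicitly; the approximate calcination temperature shown is a commonly cited value, not a figure from this article:

```latex
% Calcination in the kiln releases CO2 from limestone:
\mathrm{CaCO_3} \xrightarrow{\ \sim 900\,^{\circ}\mathrm{C}\ } \mathrm{CaO} + \mathrm{CO_2}
% Carbonation slowly reverses this in the hardened material:
\mathrm{Ca(OH)_2} + \mathrm{CO_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{H_2O}
```

Carbonation can at most recover the CO2 released by calcination, not the share emitted by fuel combustion, which is one reason the life-cycle reabsorption estimate stops near 30%.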
{
"paragraph_id": 83,
"text": "Carbonation process is considered as a mechanism of concrete degradation. It reduces pH of concrete that promotes reinforsment steel corrosion. However, as the product of Ca(OH)2 carbonation, CaCO3, occupies a greater volume, porosity of concrete reduces. This increases strength and hardness of concrete.",
"title": "Environmental impacts"
},
{
"paragraph_id": 84,
"text": "There are proposals to reduce carbon footprint of hydraulic cement by adopting non-hydraulic cement, lime mortar, for certain applications. It reabsorbs some of the CO2 during hardening, and has a lower energy requirement in production than Portland cement.",
"title": "Environmental impacts"
},
{
"paragraph_id": 85,
"text": "Few other attempts to increase absorption of carbon dioxide include cements based on magnesium (Sorel cement).",
"title": "Environmental impacts"
},
{
"paragraph_id": 86,
"text": "In some circumstances, mainly depending on the origin and the composition of the raw materials used, the high-temperature calcination process of limestone and clay minerals can release in the atmosphere gases and dust rich in volatile heavy metals, e.g. thallium, cadmium and mercury are the most toxic. Heavy metals (Tl, Cd, Hg, ...) and also selenium are often found as trace elements in common metal sulfides (pyrite (FeS2), zinc blende (ZnS), galena (PbS), ...) present as secondary minerals in most of the raw materials. Environmental regulations exist in many countries to limit these emissions. As of 2011 in the United States, cement kilns are \"legally allowed to pump more toxins into the air than are hazardous-waste incinerators.\"",
"title": "Environmental impacts"
},
{
"paragraph_id": 87,
"text": "The presence of heavy metals in the clinker arises both from the natural raw materials and from the use of recycled by-products or alternative fuels. The high pH prevailing in the cement porewater (12.5 < pH < 13.5) limits the mobility of many heavy metals by decreasing their solubility and increasing their sorption onto the cement mineral phases. Nickel, zinc and lead are commonly found in cement in non-negligible concentrations. Chromium may also directly arise as natural impurity from the raw materials or as secondary contamination from the abrasion of hard chromium steel alloys used in the ball mills when the clinker is ground. As chromate (CrO4) is toxic and may cause severe skin allergies at trace concentration, it is sometimes reduced into trivalent Cr(III) by addition of ferrous sulfate (FeSO4).",
"title": "Environmental impacts"
},
{
"paragraph_id": 88,
"text": "A cement plant consumes 3 to 6 GJ of fuel per tonne of clinker produced, depending on the raw materials and the process used. Most cement kilns today use coal and petroleum coke as primary fuels, and to a lesser extent natural gas and fuel oil. Selected waste and by-products with recoverable calorific value can be used as fuels in a cement kiln (referred to as co-processing), replacing a portion of conventional fossil fuels, like coal, if they meet strict specifications. Selected waste and by-products containing useful minerals such as calcium, silica, alumina, and iron can be used as raw materials in the kiln, replacing raw materials such as clay, shale, and limestone. Because some materials have both useful mineral content and recoverable calorific value, the distinction between alternative fuels and raw materials is not always clear. For example, sewage sludge has a low but significant calorific value, and burns to give ash containing minerals useful in the clinker matrix. Scrap automobile and truck tires are useful in cement manufacturing as they have high calorific value and the iron embedded in tires is useful as a feed stock.",
"title": "Environmental impacts"
},
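To make the quoted fuel intensity more tangible, it can be converted into an equivalent coal mass. A minimal sketch, assuming a net calorific value of about 27 GJ per tonne for steam coal (an assumed value, not a figure from this article):

```python
# Convert the quoted 3-6 GJ per tonne of clinker into an equivalent coal mass.
COAL_NCV_GJ_PER_T = 27.0  # ASSUMED net calorific value of steam coal, GJ/t

for fuel_demand_gj in (3.0, 6.0):  # low and high ends quoted above
    coal_kg = fuel_demand_gj / COAL_NCV_GJ_PER_T * 1000
    print(f"{fuel_demand_gj:.0f} GJ/t clinker is roughly {coal_kg:.0f} kg of coal per tonne")
# Roughly 110-220 kg of coal per tonne of clinker, before any substitution
# by alternative fuels (co-processing).
```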
{
"paragraph_id": 89,
"text": "Clinker is manufactured by heating raw materials inside the main burner of a kiln to a temperature of 1,450 °C. The flame reaches temperatures of 1,800 °C. The material remains at 1,200 °C for 12–15 seconds at 1,800 °C (and/ or?) for 5–8 seconds (also referred to as residence time). These characteristics of a clinker kiln offer numerous benefits and they ensure a complete destruction of organic compounds, a total neutralization of acid gases, sulphur oxides and hydrogen chloride. Furthermore, heavy metal traces are embedded in the clinker structure and no by-products, such as ash or residues, are produced.",
"title": "Environmental impacts"
},
{
"paragraph_id": 90,
"text": "The EU cement industry already uses more than 40% fuels derived from waste and biomass in supplying the thermal energy to the grey clinker making process. Although the choice for this so-called alternative fuels (AF) is typically cost driven, other factors are becoming more important. Use of alternative fuels provides benefits for both society and the company: CO2-emissions are lower than with fossil fuels, waste can be co-processed in an efficient and sustainable manner and the demand for certain virgin materials can be reduced. Yet there are large differences in the share of alternative fuels used between the European Union (EU) member states. The societal benefits could be improved if more member states increase their alternative fuels share. The Ecofys study assessed the barriers and opportunities for further uptake of alternative fuels in 14 EU member states. The Ecofys study found that local factors constrain the market potential to a much larger extent than the technical and economic feasibility of the cement industry itself.",
"title": "Environmental impacts"
},
{
"paragraph_id": 91,
"text": "Reduced-footprint cement is a cementitious material that meets or exceeds the functional performance capabilities of Portland cement. Various techniques are under development. One is geopolymer cement, which incorporates recycled materials, thereby reducing consumption of raw materials, water, and energy.",
"title": "Reduced-footprint cement"
},
{
"paragraph_id": 92,
"text": "Another approach is to reduce or eliminate the production and release of damaging pollutants and greenhouse gasses, particularly CO2.",
"title": "Reduced-footprint cement"
},
{
"paragraph_id": 93,
"text": "Growing environmental concerns and the increasing cost of fuels of fossil origin have resulted, in many countries, in a sharp reduction of the resources needed to produce cement and effluents (dust and exhaust gases).",
"title": "Reduced-footprint cement"
},
{
"paragraph_id": 94,
"text": "A team at the University of Edinburgh has developed the 'DUPE' process based on the microbial activity of Sporosarcina pasteurii, a bacterium precipitating calcium carbonate, which, when mixed with sand and urine, can produce mortar blocks with a compressive strength 70% of that of concrete.",
"title": "Reduced-footprint cement"
},
{
"paragraph_id": 95,
"text": "Recycling old cement in electric arc furnaces is another approach.",
"title": "Reduced-footprint cement"
},
{
"paragraph_id": 96,
"text": "An overview of climate-friendly methods for cement production can be found here.",
"title": "Reduced-footprint cement"
}
] | A cement is a binder, a chemical substance used for construction that sets, hardens, and adheres to other materials to bind them together. Cement is seldom used on its own, but rather to bind sand and gravel (aggregate) together. Cement mixed with fine aggregate produces mortar for masonry; mixed with sand and gravel, it produces concrete. Concrete is the most widely used material in existence and is behind only water as the planet's most-consumed resource. Cements used in construction are usually inorganic, often lime or calcium silicate based, and can be characterized as hydraulic or the less common non-hydraulic, depending on the ability of the cement to set in the presence of water. Hydraulic cements set and become adhesive through a chemical reaction between the dry ingredients and water. The chemical reaction results in mineral hydrates that are not very water-soluble and so are quite durable in water and safe from chemical attack. This allows setting in wet conditions or under water and further protects the hardened material from chemical attack. The chemical process for hydraulic cement was discovered by the ancient Romans, who used volcanic ash (pozzolana) with added lime. Non-hydraulic cement does not set in wet conditions or under water. Rather, it sets as it dries and reacts with carbon dioxide in the air. It is resistant to attack by chemicals after setting. The word "cement" can be traced back to the Ancient Roman term opus caementicium, used to describe masonry resembling modern concrete that was made from crushed rock with burnt lime as binder. The volcanic ash and pulverized brick supplements that were added to the burnt lime, to obtain a hydraulic binder, were later referred to as cementum, cimentum, cäment, and cement. In modern times, organic polymers are sometimes used as cements in concrete. World production of cement is about 4.4 billion tonnes per year, of which about half is made in China, followed by India and Vietnam. The cement production process is responsible for nearly 8% (2018) of global CO2 emissions, which include both the fuel burned to heat raw materials in a cement kiln and the resulting release of the CO2 stored in the calcium carbonate. Its hydrated products, such as concrete, gradually reabsorb substantial amounts of atmospheric CO2, compensating for nearly 30% of the initial CO2 emissions, as estimates suggest. | 2001-10-02T15:56:49Z | 2023-12-29T08:28:39Z | [
"Template:Overline",
"Template:Anchor",
"Template:Authority control",
"Template:Short description",
"Template:Use dmy dates",
"Template:Blockquote",
"Template:R",
"Template:As of",
"Template:Cite journal",
"Template:ISBN",
"Template:Other uses",
"Template:Convert",
"Template:Rp",
"Template:Div col",
"Template:Refend",
"Template:Lang",
"Template:CO2",
"Template:Clarify",
"Template:Further",
"Template:Cite EB1911",
"Template:Concrete navbox",
"Template:Div col end",
"Template:Cite book",
"Template:Webarchive",
"Template:Components of Cement, Comparison of Chemical and Physical Characteristics",
"Template:Main",
"Template:Chem2",
"Template:See also",
"Template:Reflist",
"Template:Cite web",
"Template:Cite news",
"Template:Refbegin",
"Template:Commons category"
] | https://en.wikipedia.org/wiki/Cement |
6,671 | Cincinnati Reds | The Cincinnati Reds are an American professional baseball team based in Cincinnati. They compete in Major League Baseball (MLB) as a member club of the National League (NL) Central division and were a charter member of the American Association in 1881 before joining the NL in 1890.
The Reds played in the NL West division from 1969 to 1993, before joining the Central division in 1994. For several years in the 1970s, they were considered the most dominant team in baseball, most notably winning the 1975 and 1976 World Series; the team was colloquially known as the "Big Red Machine" during this time, and it included Hall of Fame members Johnny Bench, Joe Morgan and Tony Pérez, as well as the controversial Pete Rose, the all-time hits leader. Overall, the Reds have won five World Series championships, nine NL pennants, one AA pennant and 10 division titles. The team plays its home games at Great American Ball Park, which opened in 2003. Bob Castellini has been the CEO of the Reds since 2006. From 1882 to 2022, the Reds compiled an overall win–loss record of 10,775–10,601 (a .504 winning percentage).
The origins of the modern Cincinnati Reds baseball team can be traced back to the expulsion from the National League of an earlier team bearing the same name. In 1876, Cincinnati became one of the charter members of the new National League (NL), but the club ran afoul of league organizer and longtime president William Hulbert for selling beer during games and renting out its ballpark on Sundays. Both were important in enticing the city's large German population to support the team. While Hulbert made clear his distaste for both beer and Sunday baseball at the founding of the league, neither practice was against league rules at the time. On October 6, 1880, however, seven of the eight team owners adopted a pledge to ban both beer and Sunday baseball at the regular league meeting in December. Only Cincinnati president W. H. Kennett refused to sign the pledge, so the other owners preemptively expelled Cincinnati from the league for violating the new rules even though they were not yet in effect.
Cincinnati's expulsion incensed Cincinnati Enquirer sports editor O. P. Caylor, who made two attempts to form a new league on behalf of the receivers for the now-bankrupt Reds franchise. When these attempts failed, he formed a new independent ball club known as the Red Stockings in the spring of 1881 and brought the team to St. Louis for a weekend exhibition. The Reds' first game was a 12–3 victory over the St. Louis club. After the 1881 series proved successful, Caylor and former Reds president Justus Thorner received an invitation from Philadelphia businessman Horace Phillips to attend a meeting of several clubs in Pittsburgh, planning to establish a new league to compete with the NL. Upon arriving, however, Caylor and Thorner found that no other owners had accepted the invitation, while even Phillips declined to attend his own meeting. By chance, the duo met former pitcher Al Pratt, who paired them with former Pittsburgh Alleghenys president H. Denny McKnight. Together, the three hatched a scheme to form a new league by sending a telegram to each of the owners who were invited to attend the meeting stating that he was the only person who did not attend, and that everyone else was enthusiastic about the new venture and eager to attend a second meeting in Cincinnati. The ploy worked, and the American Association (AA) was officially formed at the Hotel Gibson in Cincinnati. The new Reds – with Thorner now serving as president – became a charter member of the AA.
Led by the hitting of third baseman Hick Carpenter, the defense of future Hall of Fame second baseman Bid McPhee and the pitching of 40-game-winner Will White, the Reds won the inaugural AA pennant in 1882. With the establishment of the Union Association in 1884, Thorner left the club to finance the Cincinnati Outlaw Reds and managed to acquire the lease on the Reds' Bank Street Grounds playing field, forcing new president Aaron Stern to relocate three blocks away to the hastily built League Park. The club never placed higher than second or lower than fifth for the rest of its tenure in the American Association.
The Cincinnati Red Stockings left the American Association on November 14, 1889, and joined the National League along with the Brooklyn Bridegrooms after a dispute with St. Louis Browns owner Chris Von Der Ahe over the selection of a new league president. The National League was happy to accept the teams in part due to the emergence of the new Players' League, an early failed attempt to break the reserve clause in baseball that threatened both existing leagues. Because the National League decided to expand while the American Association was weakening, the team accepted an invitation to join the National League. After shortening their name to the Reds, the team wandered through the 1890s, signing local stars and aging veterans. During this time, the team never finished above third place (1897) and never closer than 10+1⁄2 games to first (1890).
At the start of the 20th century, the Reds had hitting stars Sam Crawford and Cy Seymour. Seymour's .377 average in 1905 was the first individual batting crown won by a Red. In 1911, Bob Bescher stole 81 bases, which is still a team record. Like the previous decade, the 1900s were not kind to the Reds, as much of the decade was spent in the league's second division.
In 1912, the club opened Redland Field (renamed Crosley Field in 1934), a new steel-and-concrete ballpark. The Reds had been playing baseball on that same site – the corner of Findlay and Western Avenues on the city's west side – for 28 years in wooden structures that had been occasionally damaged by fires. By the late 1910s, the Reds began to come out of the second division. The 1918 team finished fourth, and new manager Pat Moran led the Reds to an NL pennant in 1919, in what the club advertised as its "Golden Anniversary." The 1919 team had hitting stars Edd Roush and Heinie Groh, while the pitching staff was led by Hod Eller and left-hander Harry "Slim" Sallee. The Reds finished ahead of John McGraw's New York Giants and then won the world championship in eight games over the Chicago White Sox.
By 1920, the "Black Sox" scandal had brought a taint to the Reds' first championship. After 1926 and well into the 1930s, the Reds were second division dwellers. Eppa Rixey, Dolf Luque and Pete Donohue were pitching stars, but the offense never lived up to the pitching. By 1931, the team was bankrupt, the Great Depression was in full swing and Redland Field was in a state of disrepair.
Powel Crosley, Jr., an electronics magnate who, with his brother Lewis M. Crosley, produced radios, refrigerators and other household items, bought the Reds out of bankruptcy in 1933 and hired Larry MacPhail to be the general manager. Crosley had started WLW radio, the Reds flagship radio broadcaster, and the Crosley Broadcasting Corporation in Cincinnati, where he was also a prominent civic leader. MacPhail began to develop the Reds' minor league system and expanded the Reds' fan base. Throughout the rest of the decade, the Reds became a team of "firsts." The now-renamed Crosley Field became the host of the first night game in 1935, which was also the first baseball fireworks night. (The fireworks at the game were shot by Joe Rozzi of Rozzi's Famous Fireworks.) Johnny Vander Meer became the only pitcher in major league history to throw back-to-back no-hitters in 1938. Thanks to Vander Meer, Paul Derringer and second baseman/third baseman-turned-pitcher Bucky Walters, the Reds had a solid pitching staff. The offense came around in the late 1930s. By 1938, the Reds, led by manager Bill McKechnie, were out of the second division, finishing fourth. Ernie Lombardi was named the National League's Most Valuable Player in 1938. By 1939, the Reds were National League champions but were swept in the World Series by the New York Yankees. In 1940, the Reds repeated as NL Champions, and for the first time in 21 years, they captured a world championship, beating the Detroit Tigers 4 games to 3. Frank McCormick was the 1940 NL MVP; other position players included Harry Craft, Lonny Frey, Ival Goodman, Lew Riggs and Bill Werber.
World War II and age finally caught up with the Reds, as the team finished mostly in the second division throughout the 1940s and early 1950s. In 1944, Joe Nuxhall (who was later to become part of the radio broadcasting team), at age 15, pitched for the Reds on loan from Wilson Junior High school in Hamilton, Ohio. He became the youngest player ever to appear in a major league game, a record that still stands today. Ewell "The Whip" Blackwell was the main pitching stalwart before arm problems cut short his career. Ted Kluszewski was the NL home run leader in 1954. The rest of the offense was a collection of over-the-hill players and not-ready-for-prime-time youngsters.
In April 1953, the Reds announced a preference to be called the "Redlegs," saying that the name of the club had been "Red Stockings" and then "Redlegs." A newspaper speculated that it was due to the developing political connotation of the word "red" to mean Communism. From 1956 to 1960, the club's logo was altered to remove the term "REDS" from the inside of the "wishbone C" symbol. The word "REDS" reappeared on the 1961 uniforms, but the point of the "C" was removed. The traditional home uniform logo was reinstated in 1967.
In 1956, the Redlegs, led by National League Rookie of the Year Frank Robinson, hit 221 home runs to tie the NL record. By 1961, Robinson was joined by Vada Pinson, Wally Post, Gordy Coleman and Gene Freese. Pitchers Joey Jay, Jim O'Toole and Bob Purkey led the staff.
The Reds captured the 1961 National League pennant, holding off the Los Angeles Dodgers and San Francisco Giants, only to be defeated by the perennially powerful New York Yankees in the World Series.
The Reds had winning teams during the rest of the 1960s, but did not produce any championships. They won 98 games in 1962, paced by Purkey's 23 wins, but finished third. In 1964, they lost the pennant by one game to the St. Louis Cardinals after having taken first place when the Philadelphia Phillies collapsed in September. Their beloved manager Fred Hutchinson died of cancer just weeks after the end of the 1964 season. The failure of the Reds to win the 1964 pennant led to owner Bill DeWitt selling off key components of the team in anticipation of relocating the franchise. In response to DeWitt's threatened move, women of Cincinnati banded together to form the Rosie Reds to urge DeWitt to keep the franchise in Cincinnati. The Rosie Reds are still in existence, and are currently the oldest fan club in Major League Baseball. After the 1965 season, DeWitt executed what is remembered as the most lopsided trade in baseball history, sending former MVP Frank Robinson to the Baltimore Orioles for pitchers Milt Pappas and Jack Baldschun, and outfielder Dick Simpson. Robinson went on to win the MVP and Triple Crown in the American League in 1966, and led Baltimore to its first-ever World Series title in a sweep of the Los Angeles Dodgers. The Reds did not recover from this trade until the rise of the "Big Red Machine" in the 1970s.
Starting in the early 1960s, the Reds' farm system began producing a series of stars, including Jim Maloney (the Reds' pitching ace of the 1960s), Pete Rose, Tony Pérez, Johnny Bench, Lee May, Tommy Helms, Bernie Carbo, Hal McRae, Dave Concepción and Gary Nolan. The tipping point came in 1967, with the appointment of Bob Howsam as general manager. That same year, the Reds avoided a move to San Diego when the city of Cincinnati and Hamilton County agreed to build a state-of-the-art, downtown stadium on the edge of the Ohio River. The Reds entered into a 30-year lease in exchange for the stadium commitment keeping the franchise in Cincinnati. In a series of strategic moves, Howsam brought in key personnel to complement the homegrown talent. The Reds' final game at Crosley Field, where they had played since 1912, was played on June 24, 1970, with a 5–4 victory over the San Francisco Giants.
Under Howsam's administration starting in the late 1960s, all players coming to the Reds were required to shave and cut their hair for the next three decades in order to present the team as wholesome in an era of turmoil. The rule was controversial, but persisted well into the ownership of Marge Schott. On at least one occasion, in the early 1980s, enforcement of this rule lost the Reds the services of star reliever and Ohio native Rollie Fingers, who would not shave his trademark handlebar mustache in order to join the team. The rule was not officially rescinded until 1999, when the Reds traded for slugger Greg Vaughn, who had a goatee. The New York Yankees continue to have a similar rule today, although Yankees players are permitted to have mustaches. Much like when players leave the Yankees today, players who left the Reds took advantage with their new teams; Pete Rose, for instance, grew his hair out much longer than would be allowed by the Reds once he signed with the Philadelphia Phillies in 1979.
The Reds' rules also included conservative uniforms. In Major League Baseball, a club generally provides most of the equipment and clothing needed for play. However, players are required to supply their gloves and shoes themselves. Many players enter into sponsorship arrangements with shoe manufacturers, but until the mid-1980s, the Reds had a strict rule requiring players to wear only plain black shoes with no prominent logo. Reds players decried what they considered to be the boring color choice, as well as the denial of the opportunity to earn more money through shoe contracts. In 1985, a compromise was struck in which players could paint red marks on their black shoes and were allowed to wear all-red shoes the following year.
In 1970, little-known George "Sparky" Anderson was hired as manager of the Reds, and the team embarked upon a decade of excellence, with a lineup that came to be known as "the Big Red Machine." Playing at Crosley Field until June 30, 1970, when they moved into Riverfront Stadium, a new 52,000-seat multi-purpose venue on the shores of the Ohio River, the Reds began the 1970s with a bang by winning 70 of their first 100 games. Johnny Bench, Tony Pérez, Pete Rose, Lee May and Bobby Tolan were the early offensive leaders of this era. Gary Nolan, Jim Merritt, Wayne Simpson and Jim McGlothlin led a pitching staff that also included veterans Tony Cloninger and Clay Carroll, as well as youngsters Pedro Borbón and Don Gullett. The Reds breezed through the 1970 season, winning the NL West and capturing the NL pennant by sweeping the Pittsburgh Pirates in three games. By the time the club got to the World Series, however, the pitching staff had run out of gas, and the veteran Baltimore Orioles, led by Hall of Fame third baseman and World Series MVP Brooks Robinson, beat the Reds in five games.
After the disastrous 1971 season – the only year in the decade in which the team finished with a losing record – the Reds reloaded by trading veterans Jimmy Stewart, May and Tommy Helms to the Houston Astros for Joe Morgan, César Gerónimo, Jack Billingham, Ed Armbrister and Denis Menke. Meanwhile, Dave Concepción blossomed at shortstop. 1971 was also the year a key component of future world championships was acquired, when George Foster was traded to the Reds from the San Francisco Giants in exchange for shortstop Frank Duffy.
The 1972 Reds won the NL West in baseball's first-ever strike-shortened season, and defeated the Pittsburgh Pirates in a five-game playoff series. They then faced the Oakland Athletics in the World Series, where six of the seven games were decided by one run. With powerful slugger Reggie Jackson sidelined by an injury incurred during Oakland's playoff series, Ohio native Gene Tenace got a chance to play in the series, delivering four home runs that tied the World Series record for homers, propelling Oakland to a dramatic seven-game series win. This was one of the few World Series in which no starting pitcher for either side pitched a complete game.
The Reds won a third NL West crown in 1973 after a dramatic second-half comeback that saw them make up 10+1⁄2 games on the Los Angeles Dodgers after the All-Star break. However, they lost the NL pennant to the New York Mets in five games in the NLCS. In Game 1, Tom Seaver faced Jack Billingham in a classic pitching duel, with all three runs of the 2–1 margin being scored on home runs. John Milner provided New York's run off Billingham, while Pete Rose tied the game in the seventh inning off Seaver, setting the stage for a dramatic game-ending home run by Johnny Bench in the bottom of the ninth. The New York series provided plenty of controversy surrounding the riotous behavior of Shea Stadium fans toward Pete Rose when he and Bud Harrelson scuffled after a hard slide by Rose into Harrelson at second base during the fifth inning of Game 3. A full bench-clearing fight resulted after Harrelson responded to Rose's aggressive move to prevent him from completing a double play by calling him a name. This also led to two more incidents in which play was stopped. The Reds trailed 9–3, and New York's manager Yogi Berra and legendary outfielder Willie Mays, at the request of National League president Warren Giles, appealed to fans in left field to restrain themselves. The next day the series was extended to a fifth game when Rose homered in the 12th inning to tie the series at two games each.
The Reds won 98 games in 1974 but finished second to the 102-win Los Angeles Dodgers. The 1974 season started off with much excitement, as the Atlanta Braves were in town to open the season with the Reds. Hank Aaron entered opening day with 713 home runs, one shy of tying Babe Ruth's record of 714. The first pitch Aaron swung at in the 1974 season was the record-tying home run off Jack Billingham. The next day, the Braves benched Aaron, hoping to save him for his record-breaking home run on their season-opening homestand. Then-commissioner Bowie Kuhn ordered Braves management to play Aaron the next day, where he narrowly missed a historic home run in the fifth inning. Aaron went on to set the record in Atlanta two nights later. The 1974 season also saw the debut of Hall of Fame radio announcer Marty Brennaman after Al Michaels left the Reds to broadcast for the San Francisco Giants.
With 1975, the Big Red Machine lineup solidified with the "Great Eight" starting team of Johnny Bench (catcher), Tony Pérez (first base), Joe Morgan (second base), Dave Concepción (shortstop), Pete Rose (third base), Ken Griffey (right field), César Gerónimo (center field) and George Foster (left field). The starting pitchers included Don Gullett, Fred Norman, Gary Nolan, Jack Billingham, Pat Darcy and Clay Kirby. The bullpen featured Rawly Eastwick and Will McEnaney, who combined for 37 saves, and veterans Pedro Borbón and Clay Carroll. On Opening Day, Rose still played in left field and Foster was not a starter, while John Vukovich, an off-season acquisition, was the starting third baseman. While Vukovich was a superb fielder, he was a weak hitter. In May, with the team off to a slow start and trailing the Dodgers, Sparky Anderson made a bold move by moving Rose to third base, a position where he had very little experience, and inserting Foster in left field. This was the jolt that the Reds needed to propel them into first place, with Rose proving to be reliable on defense and the addition of Foster to the outfield giving the offense some added punch. During the season, the Reds compiled two notable streaks: winning 41 out of 50 games in one stretch, and going a month without committing any errors on defense.
In the 1975 season, Cincinnati clinched the NL West with 108 victories before sweeping the Pittsburgh Pirates in three games to win the NL pennant. They went on to face the Boston Red Sox in the World Series, splitting the first four games and taking Game 5. After a three-day rain delay, the two teams met in Game 6, considered by many to be the best World Series game ever. The Reds were ahead 6–3 with five outs left when the Red Sox tied the game on former Red Bernie Carbo's three-run home run, his second pinch-hit, three-run homer in the series. After a few close calls both ways, Carlton Fisk hit a dramatic 12th-inning home run off the foul pole in left field to give the Red Sox a 7–6 win and force a decisive game 7. Cincinnati prevailed the next day when Morgan's RBI single won Game 7 and gave the Reds their first championship in 35 years. The Reds have not lost a World Series game since Carlton Fisk's home run, a span of nine straight wins.
1976 saw a return of the same starting eight in the field. The starting rotation was again led by Nolan, Gullett, Billingham and Norman, while the addition of rookies Pat Zachry and Santo Alcalá rounded out an underrated staff in which four of the six had ERAs below 3.10. Eastwick, Borbón and McEnaney shared closer duties, recording 26, eight and seven saves, respectively. The Reds won the NL West by 10 games and went undefeated in the postseason, sweeping the Philadelphia Phillies (winning game 3 in their final at-bat) to return to the World Series, where they beat the Yankees at the newly renovated Yankee Stadium in the first Series held there since 1964. This was only the second-ever sweep of the Yankees in the World Series, and the Reds became the first NL team since the 1921–22 New York Giants to win consecutive World Series championships. To date, the 1975 and 1976 Reds were the last NL team to repeat as champions.
Beginning with the 1970 National League pennant, the Reds beat either of the two Pennsylvania-based clubs – the Philadelphia Phillies and the Pittsburgh Pirates – to win their pennants (they beat the Pirates in 1970, 1972, 1975 and 1990, and the Phillies in 1976), making the Big Red Machine part of the rivalry between the two Pennsylvania teams. In 1979, Pete Rose added further fuel to that rivalry when he signed with the Phillies and helped them win their first World Series in 1980.
The late 1970s brought turmoil and change to the Reds. Popular Tony Pérez was sent to the Montreal Expos after the 1976 season, breaking up the Big Red Machine's starting lineup. Manager Sparky Anderson and general manager Bob Howsam later considered this trade to be the biggest mistake of their careers. Starting pitcher Don Gullett left via free agency and signed with the New York Yankees. In an effort to fill that gap, a trade with the Oakland Athletics for starting ace Vida Blue was arranged during the 1977–78 offseason. However, then-commissioner Bowie Kuhn vetoed the trade in order to maintain competitive balance in baseball; some have suggested that the actual reason had more to do with Kuhn's continued feud with Athletics owner Charlie Finley. On June 15, 1977, the Reds acquired pitcher Tom Seaver from the New York Mets for Pat Zachry, Doug Flynn, Steve Henderson and Dan Norman. In other deals that proved to be less successful, the Reds traded Gary Nolan to the California Angels for Craig Hendrickson; Rawly Eastwick to the St. Louis Cardinals for Doug Capilla; and Mike Caldwell to the Milwaukee Brewers for Rick O'Keeffe and Garry Pyka, as well as Rick Auerbach from Texas. The end of the Big Red Machine era was heralded by the replacement of general manager Bob Howsam with Dick Wagner.
In his last season as a Red, Rose gave baseball a thrill as he challenged Joe DiMaggio's 56-game hitting streak, tying for the second-longest streak ever at 44 games. The streak came to an end in Atlanta when he struck out against Gene Garber in his fifth at-bat of the game. Rose also earned his 3,000th hit that season, on his way to becoming baseball's all-time hits leader when he rejoined the Reds in the mid-1980s. The year also witnessed the only no-hitter of Hall of Fame pitcher Tom Seaver's career, coming against the St. Louis Cardinals on June 16, 1978.
After the 1978 season and two straight second-place finishes, Wagner fired manager Anderson in a move that proved to be unpopular. Pete Rose, who had played almost every position for the team except pitcher, shortstop and catcher since 1963, signed with Philadelphia as a free agent. By 1979, the starters were Bench (catcher), Dan Driessen (first base), Morgan (second base), Concepción (shortstop) and Ray Knight (third base), with Griffey, Foster and Geronimo again in the outfield. The pitching staff had experienced a complete turnover since 1976, except for Fred Norman. In addition to ace starter Tom Seaver, the remaining starters were Mike LaCoss, Bill Bonham and Paul Moskau. In the bullpen, only Borbon had remained. Dave Tomlin and Mario Soto worked middle relief, with Tom Hume and Doug Bair closing. The Reds won the 1979 NL West behind the pitching of Seaver, but were dispatched in the NL playoffs by the Pittsburgh Pirates. Game 2 featured a controversial play in which a ball hit by Pittsburgh's Phil Garner was caught by Reds outfielder Dave Collins but was ruled a trap, setting the Pirates up to take a 2–1 lead. The Pirates swept the series 3 games to 0 and went on to win the World Series against the Baltimore Orioles.
The 1981 team fielded a strong lineup, with only Concepción, Foster and Griffey retaining their spots from the 1975–76 heyday. After Johnny Bench was able to play only a few games as catcher each year after 1980 due to ongoing injuries, Joe Nolan took over as starting catcher. Driessen and Bench shared first base, and Knight starred at third. Morgan and Geronimo had been replaced at second base and center field by Ron Oester and Dave Collins, respectively. Mario Soto posted a banner year as a starter, surpassed only by Seaver's outstanding Cy Young runner-up season. LaCoss, Bruce Berenyi and Frank Pastore rounded out the starting rotation. Hume again led the bullpen as closer, joined by Bair and Joe Price. In 1981, the Reds had the best overall record in baseball, but finished second in the division in both of the half-seasons that resulted from a mid-season players' strike, and missed the playoffs. To commemorate this, a team photo was taken, accompanied by a banner that read "Baseball's Best Record 1981."
By 1982, the Reds were a shell of the original Red Machine, having lost 101 games that year. Johnny Bench, after an unsuccessful transition to third base, retired a year later.
After the heartbreak of 1981, general manager Dick Wagner pursued the strategy of ridding the team of veterans, including third baseman Knight and the entire starting outfield of Griffey, Foster and Collins. Bench, after being able to catch only seven games in 1981, was moved from platooning at first base to be the starting third baseman; Alex Treviño became the regular starting catcher. The outfield was staffed with Paul Householder, César Cedeño and future Colorado Rockies and Pittsburgh Pirates manager Clint Hurdle on Opening Day. Hurdle was an immediate bust, and rookie Eddie Milner took his place in the starting outfield early in the year. The highly touted Householder struggled throughout the year despite extensive playing time. Cedeño, while providing steady veteran play, was a disappointment, unable to recapture his glory days with the Houston Astros. The starting rotation featured the emergence of a dominant Mario Soto and strong years by Pastore and Bruce Berenyi, but Seaver was injured all year, and their efforts were wasted without a strong offensive lineup. Tom Hume still led the bullpen along with Joe Price, but the colorful Brad "The Animal" Lesley was unable to consistently excel, and former All-Star Jim Kern was also a disappointment. Kern was also publicly upset over having to shave off his prominent beard to join the Reds, and helped force the issue of getting traded during mid-season by growing it back. The season also saw the midseason firing of manager John McNamara, who was replaced as skipper by Russ Nixon.
The Reds fell to the bottom of the Western Division for the next few years. After the 1982 season, Seaver was traded back to the Mets. 1983 found Dann Bilardello behind the plate, Bench returning to part-time duty at first base, rookie Nick Esasky taking over at third base and Gary Redus taking over from Cedeno. Tom Hume's effectiveness as a closer had diminished, and no other consistent relievers emerged. Dave Concepción was the sole remaining starter from the Big Red Machine era.
Wagner's tenure ended in 1983, when Howsam, the architect of the Big Red Machine, was brought back. The popular Howsam began his second term as the Reds' general manager by signing Cincinnati native Dave Parker as a free agent from Pittsburgh. In 1984, the Reds began to move up, depending on trades and some minor leaguers. In that season, Dave Parker, Dave Concepción and Tony Pérez were in Cincinnati uniforms. In August of the same year, Pete Rose was reacquired and hired to be the Reds player-manager. After raising the franchise from the grave, Howsam gave way to the administration of Bill Bergesch, who attempted to build the team around a core of highly regarded young players in addition to veterans like Parker. However, he was unable to capitalize on an excess of young and highly touted position players including Kurt Stillwell, Tracy Jones and Kal Daniels by trading them for pitching. Despite the emergence of Tom Browning as Rookie of the Year in 1985, when he won 20 games, the rotation was devastated by the early demise of Mario Soto's career to arm injury.
Under Bergesch, the Reds finished second four times from 1985 to 1989. Among the highlights, Rose became the all-time hits leader, Tom Browning threw a perfect game, Eric Davis became the first player in baseball history to hit at least 35 home runs and steal 50 bases, and Chris Sabo was the 1988 National League Rookie of the Year. The Reds also had a bullpen star in John Franco, who was with the team from 1984 to 1989. Rose once had Concepción pitch late in a game at Dodger Stadium. In 1989, following the release of the Dowd Report, which accused Rose of betting on baseball games, Rose was banned from baseball by Commissioner Bart Giamatti, who declared him guilty of "conduct detrimental to baseball." Controversy also swirled around Reds owner Marge Schott, who was accused several times of ethnic and racial slurs.
In 1987, general manager Bergesch was replaced by Murray Cook, who initiated a series of deals that would finally bring the Reds back to the championship, starting with acquisitions of Danny Jackson and José Rijo. An aging Dave Parker was let go after a revival of his career in Cincinnati following the Pittsburgh drug trials. Barry Larkin emerged as the starting shortstop over Kurt Stillwell, who, along with reliever Ted Power, was traded for Jackson. In 1989, Cook was succeeded by Bob Quinn, who put the final pieces of the championship puzzle together, with the acquisitions of Hal Morris, Billy Hatcher and Randy Myers.
In 1990, the Reds, under new manager Lou Piniella, shocked baseball by leading the NL West from wire-to-wire, making them the only NL team to do so. Winning their first nine games, they started 33–12 and maintained their lead throughout the year. Led by Chris Sabo, Barry Larkin, Eric Davis, Paul O'Neill and Billy Hatcher on the field, and by José Rijo, Tom Browning and the "Nasty Boys" – Rob Dibble, Norm Charlton and Randy Myers – on the mound, the Reds took out the Pirates in the NLCS. The Reds swept the heavily favored Oakland Athletics in four straight and extended a winning streak in the World Series to nine consecutive games. This Series, however, saw Eric Davis severely bruise a kidney diving for a fly ball in Game 4, and his play was greatly limited the next year.
In 1992, Quinn was replaced in the front office by Jim Bowden. On the field, manager Lou Piniella wanted outfielder Paul O'Neill to be a power hitter to fill the void Eric Davis left when he was traded to the Los Angeles Dodgers in exchange for Tim Belcher. However, O'Neill only hit .246 with 14 home runs. The Reds returned to winning after a losing season in 1991, but 90 wins was only enough for second place behind the division-winning Atlanta Braves. Before the season ended, Piniella got into an altercation with reliever Rob Dibble. In the offseason, Paul O'Neill was traded to the New York Yankees for outfielder Roberto Kelly, who was a disappointment for the Reds over the next couple of years, while O'Neill led a downtrodden Yankees franchise to a return to glory. Around this time, the Reds replaced their Big Red Machine–era uniforms with a sleeveless, pinstriped design.
For the 1993 season, Piniella was replaced by fan favorite Tony Pérez, but he lasted only 44 games at the helm before being replaced by Davey Johnson. With Johnson steering the team, the Reds made steady progress. In 1994, the Reds were in the newly created National League Central Division with the Chicago Cubs, St. Louis Cardinals, and rivals Pittsburgh Pirates and Houston Astros. By the time the strike hit, the Reds finished a half-game ahead of the Houston Astros for first place in the NL Central. In 1995, the Reds won the division thanks to MVP Barry Larkin. After defeating the NL West champion Dodgers in the first NLDS since 1981, however, they lost to the Atlanta Braves.
Team owner Marge Schott announced mid-season that Johnson would be gone by the end of the year, regardless of the team's outcome, to be replaced by former Reds third baseman Ray Knight. Johnson and Schott had never gotten along, and she did not approve of Johnson living with his fiancée before they were married. In contrast, Knight, along with his wife, professional golfer Nancy Lopez, were friends of Schott. The team took a dive under Knight, who was unable to complete two full seasons as manager and was subjected to complaints in the press about his strict managerial style.
In 1999, the Reds won 96 games, led by manager Jack McKeon, but lost to the New York Mets in a one-game playoff. Earlier that year, Schott sold controlling interest in the Reds to Cincinnati businessman Carl Lindner. Despite an 85–77 finish in 2000, and being named 1999 NL manager of the year, McKeon was fired after the 2000 season. The Reds did not have another winning season until 2010.
Riverfront Stadium, by then known as Cinergy Field, was demolished in 2002. Great American Ball Park opened in 2003, with high expectations for a team led by local favorites, including outfielder Ken Griffey Jr., shortstop Barry Larkin and first baseman Sean Casey. Although attendance improved considerably with the new ballpark, the Reds continued to lose. Schott had not invested much in the farm system since the early 1990s, leaving the team relatively thin on talent. After years of promises that the club was rebuilding toward the opening of the new ballpark, general manager Jim Bowden and manager Bob Boone were fired on July 28. This broke up the father-son combo of manager Bob Boone and third baseman Aaron Boone, and the latter was soon traded to the New York Yankees. Tragedy struck in November when Dernell Stenson, a promising young outfielder, was shot and killed during a carjacking. Following the season, Dan O'Brien was hired as the Reds' 16th general manager on October 27, 2003, succeeding Jim Bowden.
The 2004 and 2005 seasons continued the trend of big-hitting, poor pitching and poor records. Griffey, Jr. joined the 500 home run club in 2004, but was again hampered by injuries. Adam Dunn emerged as a consistent home run hitter, including a 535-foot (163 m) home run against José Lima. He also broke the major league record for strikeouts in 2004. Although a number of free agents were signed before 2005, the Reds were quickly in last place, and manager Dave Miley was forced out in the 2005 midseason and replaced by Jerry Narron. Like many other small-market clubs, the Reds dispatched some of their veteran players and began entrusting their future to a young nucleus that included Adam Dunn and Austin Kearns.
2004 saw the opening of the Cincinnati Reds Hall of Fame (HOF), which had been in existence in name only since the 1950s, with player plaques, photos and other memorabilia scattered throughout their front offices. Ownership and management desired a standalone facility where the public could walk through interactive displays, see locker room recreations, watch videos of classic Reds moments and peruse historical items, such as the history of Reds uniforms dating back to the 1920s or a baseball marking every hit Pete Rose had during his career.
Robert Castellini took over as controlling owner from Lindner in 2006. Castellini promptly fired general manager Dan O'Brien and hired Wayne Krivsky. The Reds made a run at the playoffs, but ultimately fell short. The 2007 season was again mired in mediocrity. Midway through the season, Jerry Narron was fired as manager and replaced by Pete Mackanin. The Reds ended up posting a winning record under Mackanin, but finished the season in fifth place in the Central Division. Mackanin was manager in an interim capacity only, and the Reds, seeking a big name to fill the spot, ultimately brought in Dusty Baker. Early in the 2008 season, Krivsky was fired and replaced by Walt Jocketty. Although the Reds did not win under Krivsky, he is credited with revamping the farm system and signing young talent that could potentially lead the team to success in the future.
The Reds failed to post winning records in both 2008 and 2009. In 2010, with NL MVP Joey Votto and Gold Glovers Brandon Phillips and Scott Rolen, the Reds posted a 91–71 record and were NL Central champions. In the playoffs, the Reds became only the second team in MLB history to be no-hit in a postseason game, when Philadelphia's Roy Halladay shut down the National League's No. 1 offense in Game 1 of the NLDS. The Reds were then swept by Philadelphia in three games.
After coming off their surprising 2010 NL Central Division title, the Reds fell short of many expectations for the 2011 season. Multiple injuries and inconsistent starting pitching played a big role in their mid-season collapse, along with a less productive offense as compared to the previous year. The Reds ended the 2011 season at 79–83, but rebounded to win the 2012 NL Central Division title. On September 28, 2012, Homer Bailey threw a 1–0 no-hitter against the Pittsburgh Pirates, marking the first Reds no-hitter since Tom Browning's perfect game in 1988. Finishing with a 97–65 record, the Reds earned the second seed in the Division Series and a matchup with the eventual World Series champion, the San Francisco Giants. After taking a 2–0 lead with road victories at AT&T Park, they headed home looking to win the series. However, they lost three straight at their home ballpark, becoming the first National League team since the Chicago Cubs in 1984 to lose a division series after leading 2–0.
In the offseason, the team traded outfielder Drew Stubbs – as part of a three-team deal with the Arizona Diamondbacks and Cleveland Indians – to the Indians, and in turn received right fielder Shin-Soo Choo. On July 2, 2013, Homer Bailey pitched a no-hitter against the San Francisco Giants for a 4–0 Reds victory, making him the third pitcher in Reds history with two complete-game no-hitters in their career.
Following six consecutive losses to close out the 2013 season, including a loss to the Pittsburgh Pirates at PNC Park in the National League wild-card playoff game, the Reds decided to fire Dusty Baker. During his six years as manager, Baker led the Reds to the playoffs three times; however, they never advanced beyond the first round.
On October 22, 2013, the Reds hired pitching coach Bryan Price to replace Baker as manager. Under Price, the Reds were led by pitchers Johnny Cueto and the hard-throwing Aroldis Chapman. The offense was led by All-Star third baseman Todd Frazier, Joey Votto and Brandon Phillips, but despite plenty of star power, the Reds never got off to a good start and finished the season in fourth place in the division with a 76–86 record. During the offseason, the Reds traded pitchers Alfredo Simón to the Tigers and Mat Latos to the Marlins. In return, they acquired young talents such as Eugenio Suárez and Anthony DeSclafani. They also acquired veteran slugger Marlon Byrd from the Phillies to play left field.
The Reds' 2015 season was not much better, as they finished with the second-worst record in the league at 64–98, their worst finish since 1982. The Reds were forced to trade star pitchers Johnny Cueto and Mike Leake to the Kansas City Royals and San Francisco Giants, respectively, receiving minor league pitching prospects for both. Shortly after the season's end, the Reds traded Home Run Derby champion Todd Frazier to the Chicago White Sox and closing pitcher Aroldis Chapman to the New York Yankees.
In 2016, the Reds broke the then-record for home runs allowed during a single season. The Reds held this record until the 2019 season, when it was broken by the Baltimore Orioles. The previous record holder was the 1996 Detroit Tigers, with 241 home runs yielded to opposing teams. The Reds went 68–94 and again were one of the worst teams in MLB. The Reds traded outfielder Jay Bruce to the Mets just before the July 31 non-waiver trade deadline in exchange for two prospects: infielder Dilson Herrera and pitcher Max Wotell. During the offseason, the Reds traded Brandon Phillips to the Atlanta Braves in exchange for two minor league pitchers.
On September 25, 2020, the Reds earned their first postseason berth since 2013, ultimately earning the seventh seed in the expanded 2020 playoffs. The 2020 season had been shortened to 60 games as a result of the COVID-19 pandemic. The Reds lost their first-round series against the Atlanta Braves two games to none.
The Reds finished the 2021 season with a record of 83–79, good for third in the NL Central.
In 2022, the Reds started the regular season with a ghastly 3–22 record. A three-win total through 25 games had not been seen since the 2003 Detroit Tigers and was tied for second-worst overall behind the 1988 Baltimore Orioles, who started 2–23 in their first 25 games. The Reds finished the season with a record of 62–100.
The 2023 season found the Reds in contention for a wild-card berth up until the final weekend of the season. They eventually fell short of a playoff berth by two games, with a record of 82–80. The team was led by a group of young players including rookies Spencer Steer, Matt McLain and Elly De La Cruz. De La Cruz caused quite a buzz from the beginning of his mid-season call-up, and in his 15th career game he became the first Red to hit for the cycle since Eric Davis in 1989. At the end of the season, retirement speculation surrounded former MVP Joey Votto.
The Cincinnati Reds play their home games at Great American Ball Park, located at 100 Joe Nuxhall Way in downtown Cincinnati. Great American Ball Park opened in 2003 at a cost of $290 million and has a capacity of 42,271. In addition to serving as the Reds' home field, the stadium houses the Cincinnati Reds Hall of Fame, which allows fans to walk through the history of the franchise and take part in many interactive baseball features.
Great American Ball Park is the seventh home of the Cincinnati Reds, built immediately to the east of the site on which Riverfront Stadium, later named Cinergy Field, once stood. The first ballpark the Reds occupied was Bank Street Grounds, from 1882 to 1883, until they moved to League Park I in 1884, where they remained until 1893. Through the late 1890s and early 1900s, the Reds occupied two parks for less than 10 years each: League Park II was their third home field, from 1894 to 1901, followed by the Palace of the Fans, which served as home of the Reds through the 1911 season. In 1912, the Reds moved to Redland Field (renamed Crosley Field in 1934), which they called home for 58 years. Crosley Field was the Reds' home for two World Series titles and four National League pennants. Beginning June 30, 1970, during the dynasty of the Big Red Machine, the Reds played in Riverfront Stadium, aptly named for its location beside the Ohio River. Riverfront saw three World Series titles and five National League pennants. In the late 1990s, the city agreed to build two separate stadiums on the riverfront for the Reds and the Cincinnati Bengals, and in 2003, the Reds began a new era with the opening of their current ballpark.
The Reds hold their spring training in Goodyear, Arizona, at Goodyear Ballpark. The Reds moved to this stadium and the Cactus League in 2010 after spending most of their history in the Grapefruit League. The Reds share Goodyear Ballpark with their in-state rivals, the Cleveland Guardians.
Throughout the team's history, many variations of the classic wishbone "C" logo have been introduced. In the team's early history, the logo was simply the wishbone "C" with the word "REDS" inside, using only red and white. During the 1950s, when the team was renamed the Cincinnati Redlegs because of the association of the word "Reds" with communism, blue was introduced into the color combination. During the 1960s and 1970s, the Reds moved back toward more traditional colors, abandoning the blue. A new logo appeared with the new era of baseball in 1972, when the team moved away from the script "REDS" inside the "C," instead placing its mascot, Mr. Redlegs, there and putting the name of the team inside the wishbone "C." In the 1990s, the more traditional early logos returned, and the current logo more closely reflects the team's logo from its founding.
Along with the logo, the Reds' uniforms have changed many times throughout their history. In 1956, during the era in which they were known as the "Redlegs," the team made a groundbreaking change to its uniforms with the use of sleeveless jerseys, seen only once before in the major leagues, with the Chicago Cubs. At home and away, the cap was all red with a white wishbone "C" insignia. The long-sleeved undershirts were red. The uniform was plain white with a red wishbone "C" logo on the left and the uniform number on the right. On the road, the wishbone "C" was replaced by the mustachioed "Mr. Redlegs" logo, the pillbox-hat-wearing man with a baseball for a head. The home stockings were red with six white stripes; the away stockings had only three white stripes.
The Reds changed uniforms again in 1961, replacing the traditional wishbone "C" insignia with an oval-shaped "C" logo while continuing to use sleeveless jerseys. At home, the Reds wore white caps with a red bill and the oval "C" in red, and white sleeveless jerseys with red pinstripes bearing the oval "C-REDS" logo, in black with red lettering, on the left breast and the number in red on the right. The gray away uniform included a gray cap with a red bill and the red oval "C," and a sleeveless jersey bearing "CINCINNATI" in an arched block style across the front with the number below on the left. In 1964, players' last names were placed on the backs of the uniforms, below the numbers. Those uniforms were scrapped after the 1966 season.
However, the Cincinnati uniform design most familiar to baseball enthusiasts is the one whose basic form, with minor variations, held sway for 26 seasons from 1967 to 1992. Most significantly, the point was restored to the "C" insignia, making it a wishbone again. During this era, the Reds wore all-red caps both at home and on the road, bearing the simple wishbone "C" insignia in white. The uniforms were standard short-sleeved jerseys and standard trousers – white at home and gray on the road. The home uniform featured the wishbone "C-REDS" logo in red with white type on the left breast and the uniform number in red on the right. The away uniform bore "CINCINNATI" in an arched block style across the front with the uniform number below on the left. Red long-sleeved undershirts and plain red stirrups over white sanitary stockings completed the basic design. The Reds wore pinstriped home uniforms in 1967 only, and the uniforms were flannel through 1971, changing to double-knits with pullover jerseys and beltless pants in 1972. Those pullover uniforms lasted 21 seasons, and the 1992 Reds were the last MLB team to date whose primary uniforms featured pullover jerseys and beltless pants.
The 1993 uniforms, which did away with the pullovers and brought back button-down jerseys, kept white and gray as the base colors for the home and away uniforms, but added red pinstripes. The home jerseys were sleeveless, showing more of the red undershirts. The color scheme of the "C-REDS" logo on the home uniform was reversed, now red lettering on a white background. A new home cap was created that had a red bill and a white crown with red pinstripes and a red wishbone "C" insignia. The away uniform kept the all-red cap, but moved the uniform number to the left to more closely match the home uniform. The only additional change to these uniforms was the introduction of black as a primary color of the Reds in 1999, especially on their road uniforms.
The Reds' next major uniform change came in December 2006, with a design that differed significantly from the uniforms worn during the previous eight seasons. The home caps returned to an all-red design with a white wishbone "C," lightly outlined in black. Caps with red crowns and a black bill became the new road caps. Additionally, the sleeveless jersey was abandoned for a more traditional design. The numbers and lettering for the names on the backs of the jerseys were changed to an early 1900s–style typeface, and a handlebar-mustached "Mr. Redlegs" – reminiscent of the logo used by the Reds in the 1950s and 1960s – was placed on the left sleeve.
In 2023, the Reds and Nike, Inc. introduced a City Connect jersey, which features a modified "C" on the cap and on the sleeve. The jersey features "CINCY" (short for Cincinnati) across the chest. The collar features an Ohio buckeye along with Cincinnati's motto, "Juncta Juvant" (Latin for "Strength in Unity"). The design is intended to inspire the future direction of Reds jerseys.
The Cincinnati Reds have retired 10 numbers in franchise history, in addition to honoring Jackie Robinson, whose number is retired throughout Major League Baseball.
All of the retired numbers are located at Great American Ball Park behind home plate on the outside of the press box. Along with the retired players' and managers' numbers, the following broadcasters are honored with microphones by the broadcast booth: Marty Brennaman, Waite Hoyt and Joe Nuxhall.
On April 15, 1997, No. 42 was retired throughout Major League Baseball in honor of Jackie Robinson.
The Reds have hosted the Major League Baseball All-Star Game five times: twice at Crosley Field (1938, 1953), twice at Riverfront Stadium (1970, 1988) and once at Great American Ball Park (2015).
The Ohio Cup was an annual pre-season baseball game that pitted the Ohio rivals, the Cleveland Indians and the Cincinnati Reds, against each other. In its original form it was a single-game cup, played each year at minor-league Cooper Stadium in Columbus just days before the start of each new Major League Baseball season.
A total of eight Ohio Cup games were played, between 1989 and 1996, with the Indians winning six of them. The winner of the game each year was awarded the Ohio Cup in postgame ceremonies. The Ohio Cup was a favorite among baseball fans in Columbus, with attendances regularly topping 15,000.
The Ohio Cup games ended with the introduction of regular-season interleague play in 1997. Thereafter, the two teams competed annually in the regular-season Battle of Ohio or Buckeye Series. The Ohio Cup was revived in 2008 as a reward for the team with the better overall record in the Reds–Indians series each year.
The Pirates–Reds rivalry was one of the fiercest matchups in the National League during the 1970s; the two teams met in the postseason several times before both were realigned into the National League Central in 1994. The franchises date to the infancy of MLB, both having been founded in the 1880s, and they first met during the 1900 MLB season. The two teams combine for 10 World Series championships and 18 pennants. The Pirates and Reds met five times in the NLCS, in 1970, 1972, 1975, 1979 and 1990. Most recently, the two teams met in the 2013 NL Wild Card Game.
As of 2023, the Pirates lead the all-time series 1,141–1,113; however, the Reds lead in postseason wins, 13–8.
The Dodgers–Reds rivalry was one of the most intense from the 1970s through the early 1990s, as the teams often competed for the NL West division title. From 1970 to 1990, they had eleven 1–2 finishes in the standings, seven of which were decided by 5½ games or fewer. Both teams also played in numerous championships during this span, combining to win 10 NL pennants and five World Series titles from 1970 to 1990, notably as the Big Red Machine clashed frequently with the Tommy Lasorda–era Dodgers. Reds manager Sparky Anderson once said, "I don't think there's a rivalry like ours in either league. The Giants are supposed to be the Dodgers' natural rivals, but I don't think the feeling is there anymore. It's not there the way it is with us and the Dodgers." The rivalry faded when division realignment moved the Reds to the NL Central, though the teams did face one another in the 1995 NLDS.
The Reds' flagship radio station has been WLW (700 AM) since 1969. Prior to that, the Reds were heard over WKRC, WCPO, WSAI and WCKY. WLW, a 50,000-watt "blowtorch" outlet also known as "The Nation's Station," is "clear channel" in more than one way: it broadcasts on a clear-channel frequency and is owned by iHeartMedia, formerly Clear Channel Communications. Reds games can be heard on more than 100 local radio stations through the Reds on Radio Network.
Since 2020, the Reds broadcast team has been former Pensacola Blue Wahoos radio play-by-play announcer Tommy Thrall and retired relief pitcher Jeff Brantley on color commentary.
Marty Brennaman called Reds games from 1974 to 2019, most famously alongside former Reds pitcher and color commentator Joe Nuxhall through 2007. Brennaman won the Ford C. Frick Award for his work, which includes his famous call of "... and this one belongs to the Reds!" after a win. Nuxhall preceded Brennaman in the Reds' booth, joining in 1967 (the year after his retirement as an active player) and remaining until his death in 2007. (From 2004 to 2007, Nuxhall called only select home games.)
In 2007, Thom Brennaman, a veteran announcer seen nationwide on Fox Sports, joined his father Marty in the radio booth. Brantley, formerly of ESPN, also joined the network in 2007. Three years later, in 2010, Brantley's and Thom Brennaman's increased TV schedules led to more appearances for Jim Kelch, who had filled in on the network since 2008. Kelch's contract expired after the 2017 season.
In 2019, Thrall was brought in to provide in-game and post-game coverage, as well as act as a fill-in play-by-play announcer. He succeeded Marty Brennaman when Brennaman retired at the end of the 2019 season.
Televised games are seen exclusively on Bally Sports Ohio and Bally Sports Indiana. In addition, Bally Sports South televises Bally Sports Ohio broadcasts of Reds games to Tennessee and western North Carolina. George Grande, who hosted the first SportsCenter on ESPN in 1979, was the play-by-play announcer, usually alongside Chris Welsh, from 1993 until his retirement during the final game of the 2009 season. Since 2009, Grande has worked part time for the Reds as play-by-play announcer in September when Thom Brennaman is covering the NFL for Fox Sports. He has also made guest appearances throughout each season. Brennaman had been the head play-by-play commentator since 2010, with Welsh and Brantley sharing time as the color commentators. Cincinnati native Paul Keels, who left in 2011 to devote more time to his full-time job as the play-by-play announcer for the Ohio State Buckeyes Radio Network, was the Reds' backup play-by-play television announcer during the 2010 season. Jim Kelch served as Keels' replacement. The Reds also added former Reds first baseman Sean Casey – known as "The Mayor" by Reds fans – to do color commentary for approximately 15 games in 2011.
NBC affiliate WLWT carried Reds games from 1948 to 1995. Those who called games for WLWT include Waite Hoyt, Ray Lane, Steve Physioc, Johnny Bench, Joe Morgan and Ken Wilson. Al Michaels, who went on to a long career with ABC and NBC, spent three years in Cincinnati early in his career. The last regularly scheduled over-the-air broadcasts of Reds games were on WSTR-TV from 1996 to 1998. Since 2010, WKRC-TV has simulcast Opening Day games with Fox/Bally Sports Ohio, with which it came into common ownership in 2019.
On August 19, 2020, Thom Brennaman was caught uttering a homophobic slur during a game against the Kansas City Royals. Brennaman eventually apologized for the incident and was suspended, but on September 26, he resigned from his duties as the Reds' TV play-by-play announcer. This ended the Brennamans' 46-year association with the Reds franchise, dating back to Marty's first season in 1974. Sideline reporter Jim Day served as the interim play-by-play voice for the remainder of the 2020 season, after which the Reds hired John Sadak to serve as its television play-by-play announcer.
The Reds Community Fund, founded in 2001, focuses on the youth of the Greater Cincinnati area, with the goal of improving participants' lives by leveraging the traditions of the Reds. The fund sponsors the Reviving Baseball in Inner Cities (RBI) program, with a goal of 30–50 young people graduating from high school and attending college annually, and it holds an annual telethon that raises in excess of $120,000. An example of the fund's community involvement is its renovation of Hoffman Fields in the city's Evanston neighborhood, which upgraded the entire recreation complex; in all, the fund has renovated more than 400 baseball diamonds at 200 locations throughout the region.
During the COVID-19 pandemic in 2020, since no spectators were allowed at MLB games, the Reds offered fans the opportunity to purchase paper cutouts of their own photographs in the stands at Great American Ball Park. The promotion raised over $300,000 for the fund, more than the fund's traditional events such as Redsfest, the Redlegs Run, an annual golf outing and the Fox Sports Ohio Telethon.
The Cincinnati Reds farm system consists of six minor league affiliates.
{
"paragraph_id": 0,
"text": "The Cincinnati Reds are an American professional baseball team based in Cincinnati. They compete in Major League Baseball (MLB) as a member club of the National League (NL) Central division and were a charter member of the American Association in 1881 before joining the NL in 1890.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Reds played in the NL West division from 1969 to 1993, before joining the Central division in 1994. For several years in the 1970s, they were considered the most dominant team in baseball, most notably winning the 1975 and 1976 World Series; the team was colloquially known as the \"Big Red Machine\" during this time, and it included Hall of Fame members Johnny Bench, Joe Morgan and Tony Pérez, as well as the controversial Pete Rose, the all-time hits leader. Overall, the Reds have won five World Series championships, nine NL pennants, one AA pennant and 10 division titles. The team plays its home games at Great American Ball Park, which opened in 2003. Bob Castellini has been the CEO of the Reds since 2006. From 1882 to 2022, the Reds' overall win–loss record is 10,775–10,601 (a .504 winning percentage).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The origins of the modern Cincinnati Reds baseball team can be traced back to the expulsion from the National League of an earlier team bearing the same name. In 1876, Cincinnati became one of the charter members of the new National League (NL), but the club ran afoul of league organizer and longtime president William Hulbert for selling beer during games and renting out its ballpark on Sundays. Both were important in enticing the city's large German population to support the team. While Hulbert made clear his distaste for both beer and Sunday baseball at the founding of the league, neither practice was against league rules at the time. On October 6, 1880, however, seven of the eight team owners adopted a pledge to ban both beer and Sunday baseball at the regular league meeting in December. Only Cincinnati president W. H. Kennett refused to sign the pledge, so the other owners preemptively expelled Cincinnati from the league for violating the new rules even though they were not yet in effect.",
"title": "Franchise history"
},
{
"paragraph_id": 3,
"text": "Cincinnati's expulsion incensed Cincinnati Enquirer sports editor O. P. Caylor, who made two attempts to form a new league on behalf of the receivers for the now-bankrupt Reds franchise. When these attempts failed, he formed a new independent ball club known as the Red Stockings in the spring of 1881 and brought the team to St. Louis for a weekend exhibition. The Reds' first game was a 12–3 victory over the St. Louis club. After the 1881 series proved successful, Caylor and former Reds president Justus Thorner received an invitation from Philadelphia businessman Horace Phillips to attend a meeting of several clubs in Pittsburgh, planning to establish a new league to compete with the NL. Upon arriving, however, Caylor and Thorner found that no other owners had accepted the invitation, while even Phillips declined to attend his own meeting. By chance, the duo met former pitcher Al Pratt, who paired them with former Pittsburgh Alleghenys president H. Denny McKnight. Together, the three hatched a scheme to form a new league by sending a telegram to each of the owners who were invited to attend the meeting stating that he was the only person who did not attend, and that everyone else was enthusiastic about the new venture and eager to attend a second meeting in Cincinnati. The ploy worked, and the American Association (AA) was officially formed at the Hotel Gibson in Cincinnati. The new Reds – with Thorner now serving as president – became a charter member of the AA.",
"title": "Franchise history"
},
{
"paragraph_id": 4,
"text": "Led by the hitting of third baseman Hick Carpenter, the defense of future Hall of Fame second baseman Bid McPhee and the pitching of 40-game-winner Will White, the Reds won the inaugural AA pennant in 1882. With the establishment of the Union Association in 1884, Thorner left the club to finance the Cincinnati Outlaw Reds and managed to acquire the lease on the Reds' Bank Street Grounds playing field, forcing new president Aaron Stern to relocate three blocks away to the hastily built League Park. The club never placed higher than second or lower than fifth for the rest of its tenure in the American Association.",
"title": "Franchise history"
},
{
"paragraph_id": 5,
"text": "The Cincinnati Red Stockings left the American Association on November 14, 1889, and joined the National League along with the Brooklyn Bridegrooms after a dispute with St. Louis Browns owner Chris Von Der Ahe over the selection of a new league president. The National League was happy to accept the teams in part due to the emergence of the new Player's League, an early failed attempt to break the reserve clause in baseball that threatened both existing leagues. Because the National League decided to expand while the American Association was weakening, the team accepted an invitation to join the National League. After shortening their name to the Reds, the team wandered through the 1890s, signing local stars and aging veterans. During this time, the team never finished above third place (1897) and never closer than 101⁄2 games to first (1890).",
"title": "Franchise history"
},
{
"paragraph_id": 6,
"text": "At the start of the 20th century, the Reds had hitting stars Sam Crawford and Cy Seymour. Seymour's .377 average in 1905 was the first individual batting crown won by a Red. In 1911, Bob Bescher stole 81 bases, which is still a team record. Like the previous decade, the 1900s were not kind to the Reds, as much of the decade was spent in the league's second division.",
"title": "Franchise history"
},
{
"paragraph_id": 7,
"text": "In 1912, the club opened Redland Field (renamed Crosley Field in 1934), a new steel-and-concrete ballpark. The Reds had been playing baseball on that same site – the corner of Findlay and Western Avenues on the city's west side – for 28 years in wooden structures that had been occasionally damaged by fires. By the late 1910s, the Reds began to come out of the second division. The 1918 team finished fourth, and new manager Pat Moran led the Reds to an NL pennant in 1919, in what the club advertised as its \"Golden Anniversary.\" The 1919 team had hitting stars Edd Roush and Heinie Groh, while the pitching staff was led by Hod Eller and left-hander Harry \"Slim\" Sallee. The Reds finished ahead of John McGraw's New York Giants and then won the world championship in eight games over the Chicago White Sox.",
"title": "Franchise history"
},
{
"paragraph_id": 8,
"text": "By 1920, the \"Black Sox\" scandal had brought a taint to the Reds' first championship. After 1926 and well into the 1930s, the Reds were second division dwellers. Eppa Rixey, Dolf Luque and Pete Donohue were pitching stars, but the offense never lived up to the pitching. By 1931, the team was bankrupt, the Great Depression was in full swing and Redland Field was in a state of disrepair.",
"title": "Franchise history"
},
{
"paragraph_id": 9,
"text": "Powel Crosley, Jr., an electronics magnate who, with his brother Lewis M. Crosley, produced radios, refrigerators and other household items, bought the Reds out of bankruptcy in 1933 and hired Larry MacPhail to be the general manager. Crosley had started WLW radio, the Reds flagship radio broadcaster, and the Crosley Broadcasting Corporation in Cincinnati, where he was also a prominent civic leader. MacPhail began to develop the Reds' minor league system and expanded the Reds' fan base. Throughout the rest of the decade, the Reds became a team of \"firsts.\" The now-renamed Crosley Field became the host of the first night game in 1935, which was also the first baseball fireworks night. (The fireworks at the game were shot by Joe Rozzi of Rozzi's Famous Fireworks.) Johnny Vander Meer became the only pitcher in major league history to throw back-to-back no-hitters in 1938. Thanks to Vander Meer, Paul Derringer and second baseman/third baseman-turned-pitcher Bucky Walters, the Reds had a solid pitching staff. The offense came around in the late 1930s. By 1938, the Reds, led by manager Bill McKechnie, were out of the second division, finishing fourth. Ernie Lombardi was named the National League's Most Valuable Player in 1938. By 1939, the Reds were National League champions but were swept in the World Series by the New York Yankees. In 1940, the Reds repeated as NL Champions, and for the first time in 21 years, they captured a world championship, beating the Detroit Tigers 4 games to 3. Frank McCormick was the 1940 NL MVP; other position players included Harry Craft, Lonny Frey, Ival Goodman, Lew Riggs and Bill Werber.",
"title": "Franchise history"
},
{
"paragraph_id": 10,
"text": "World War II and age finally caught up with the Reds, as the team finished mostly in the second division throughout the 1940s and early 1950s. In 1944, Joe Nuxhall (who was later to become part of the radio broadcasting team), at age 15, pitched for the Reds on loan from Wilson Junior High school in Hamilton, Ohio. He became the youngest player ever to appear in a major league game, a record that still stands today. Ewell \"The Whip\" Blackwell was the main pitching stalwart before arm problems cut short his career. Ted Kluszewski was the NL home run leader in 1954. The rest of the offense was a collection of over-the-hill players and not-ready-for-prime-time youngsters.",
"title": "Franchise history"
},
{
"paragraph_id": 11,
"text": "In April 1953, the Reds announced a preference to be called the \"Redlegs,\" saying that the name of the club had been \"Red Stockings\" and then \"Redlegs.\" A newspaper speculated that it was due to the developing political connotation of the word \"red\" to mean Communism. From 1956 to 1960, the club's logo was altered to remove the term \"REDS\" from the inside of the \"wishbone C\" symbol. The word \"REDS\" reappeared on the 1961 uniforms, but the point of the \"C\" was removed. The traditional home uniform logo was reinstated in 1967.",
"title": "Franchise history"
},
{
"paragraph_id": 12,
"text": "In 1956, the Redlegs, led by National League Rookie of the Year Frank Robinson, hit 221 home runs to tie the NL record. By 1961, Robinson was joined by Vada Pinson, Wally Post, Gordy Coleman and Gene Freese. Pitchers Joey Jay, Jim O'Toole and Bob Purkey led the staff.",
"title": "Franchise history"
},
{
"paragraph_id": 13,
"text": "The Reds captured the 1961 National League pennant, holding off the Los Angeles Dodgers and San Francisco Giants, only to be defeated by the perennially powerful New York Yankees in the World Series.",
"title": "Franchise history"
},
{
"paragraph_id": 14,
"text": "The Reds had winning teams during the rest of the 1960s, but did not produce any championships. They won 98 games in 1962, paced by Purkey's 23 wins, but finished third. In 1964, they lost the pennant by one game to the St. Louis Cardinals after having taken first place when the Philadelphia Phillies collapsed in September. Their beloved manager Fred Hutchinson died of cancer just weeks after the end of the 1964 season. The failure of the Reds to win the 1964 pennant led to owner Bill DeWitt selling off key components of the team in anticipation of relocating the franchise. In response to DeWitt's threatened move, women of Cincinnati banded together to form the Rosie Reds to urge DeWitt to keep the franchise in Cincinnati. The Rosie Reds are still in existence, and are currently the oldest fan club in Major League Baseball. After the 1965 season, DeWitt executed what is remembered as the most lopsided trade in baseball history, sending former MVP Frank Robinson to the Baltimore Orioles for pitchers Milt Pappas and Jack Baldschun, and outfielder Dick Simpson. Robinson went on to win the MVP and Triple Crown in the American League in 1966, and led Baltimore to its first-ever World Series title in a sweep of the Los Angeles Dodgers. The Reds did not recover from this trade until the rise of the \"Big Red Machine\" in the 1970s.",
"title": "Franchise history"
},
{
"paragraph_id": 15,
"text": "Starting in the early 1960s, the Reds' farm system began producing a series of stars, including Jim Maloney (the Reds' pitching ace of the 1960s), Pete Rose, Tony Pérez, Johnny Bench, Lee May, Tommy Helms, Bernie Carbo, Hal McRae, Dave Concepción and Gary Nolan. The tipping point came in 1967, with the appointment of Bob Howsam as general manager. That same year, the Reds avoided a move to San Diego when the city of Cincinnati and Hamilton County agreed to build a state-of-the-art, downtown stadium on the edge of the Ohio River. The Reds entered into a 30-year lease in exchange for the stadium commitment keeping the franchise in Cincinnati. In a series of strategic moves, Howsam brought in key personnel to complement the homegrown talent. The Reds' final game at Crosley Field, where they had played since 1912, was played on June 24, 1970, with a 5–4 victory over the San Francisco Giants.",
"title": "Franchise history"
},
{
"paragraph_id": 16,
"text": "Under Howsam's administration starting in the late 1960s, all players coming to the Reds were required to shave and cut their hair for the next three decades in order to present the team as wholesome in an era of turmoil. The rule was controversial, but persisted well into the ownership of Marge Schott. On at least one occasion, in the early 1980s, enforcement of this rule lost the Reds the services of star reliever and Ohio native Rollie Fingers, who would not shave his trademark handlebar mustache in order to join the team. The rule was not officially rescinded until 1999, when the Reds traded for slugger Greg Vaughn, who had a goatee. The New York Yankees continue to have a similar rule today, although Yankees players are permitted to have mustaches. Much like when players leave the Yankees today, players who left the Reds took advantage with their new teams; Pete Rose, for instance, grew his hair out much longer than would be allowed by the Reds once he signed with the Philadelphia Phillies in 1979.",
"title": "Franchise history"
},
{
"paragraph_id": 17,
"text": "The Reds' rules also included conservative uniforms. In Major League Baseball, a club generally provides most of the equipment and clothing needed for play. However, players are required to supply their gloves and shoes themselves. Many players enter into sponsorship arrangements with shoe manufacturers, but until the mid-1980s, the Reds had a strict rule requiring players to wear only plain black shoes with no prominent logo. Reds players decried what they considered to be the boring color choice, as well as the denial of the opportunity to earn more money through shoe contracts. In 1985, a compromise was struck in which players could paint red marks on their black shoes and were allowed to wear all-red shoes the following year.",
"title": "Franchise history"
},
{
"paragraph_id": 18,
"text": "In 1970, little-known George \"Sparky\" Anderson was hired as manager of the Reds, and the team embarked upon a decade of excellence, with a lineup that came to be known as \"the Big Red Machine.\" Playing at Crosley Field until June 30, 1970, when they moved into Riverfront Stadium, a new 52,000-seat multi-purpose venue on the shores of the Ohio River, the Reds began the 1970s with a bang by winning 70 of their first 100 games. Johnny Bench, Tony Pérez, Pete Rose, Lee May and Bobby Tolan were the early offensive leaders of this era. Gary Nolan, Jim Merritt, Wayne Simpson and Jim McGlothlin led a pitching staff that also included veterans Tony Cloninger and Clay Carroll, as well as youngsters Pedro Borbón and Don Gullett. The Reds breezed through the 1970 season, winning the NL West and capturing the NL pennant by sweeping the Pittsburgh Pirates in three games. By the time the club got to the World Series, however, the pitching staff had run out of gas, and the veteran Baltimore Orioles, led by Hall of Fame third baseman and World Series MVP Brooks Robinson, beat the Reds in five games.",
"title": "Franchise history"
},
{
"paragraph_id": 19,
"text": "After the disastrous 1971 season – the only year in the decade in which the team finished with a losing record – the Reds reloaded by trading veterans Jimmy Stewart, May and Tommy Helms to the Houston Astros for Joe Morgan, César Gerónimo, Jack Billingham, Ed Armbrister and Denis Menke. Meanwhile, Dave Concepción blossomed at shortstop. 1971 was also the year a key component of future world championships was acquired, when George Foster was traded to the Reds from the San Francisco Giants in exchange for shortstop Frank Duffy.",
"title": "Franchise history"
},
{
"paragraph_id": 20,
"text": "The 1972 Reds won the NL West in baseball's first-ever strike-shortened season, and defeated the Pittsburgh Pirates in a five-game playoff series. They then faced the Oakland Athletics in the World Series, where six of the seven games were decided by one run. With powerful slugger Reggie Jackson sidelined by an injury incurred during Oakland's playoff series, Ohio native Gene Tenace got a chance to play in the series, delivering four home runs that tied the World Series record for homers, propelling Oakland to a dramatic seven-game series win. This was one of the few World Series in which no starting pitcher for either side pitched a complete game.",
"title": "Franchise history"
},
{
"paragraph_id": 21,
"text": "The Reds won a third NL West crown in 1973 after a dramatic second-half comeback that saw them make up 10+1⁄2 games on the Los Angeles Dodgers after the All-Star break. However, they lost the NL pennant to the New York Mets in five games in the NLCS. In Game 1, Tom Seaver faced Jack Billingham in a classic pitching duel, with all three runs of the 2–1 margin being scored on home runs. John Milner provided New York's run off Billingham, while Pete Rose tied the game in the seventh inning off Seaver, setting the stage for a dramatic game-ending home run by Johnny Bench in the bottom of the ninth. The New York series provided plenty of controversy surrounding the riotous behavior of Shea Stadium fans toward Pete Rose when he and Bud Harrelson scuffled after a hard slide by Rose into Harrelson at second base during the fifth inning of Game 3. A full bench-clearing fight resulted after Harrelson responded to Rose's aggressive move to prevent him from completing a double play by calling him a name. This also led to two more incidents in which play was stopped. The Reds trailed 9–3, and New York's manager Yogi Berra and legendary outfielder Willie Mays, at the request of National League president Warren Giles, appealed to fans in left field to restrain themselves. The next day the series was extended to a fifth game when Rose homered in the 12th inning to tie the series at two games each.",
"title": "Franchise history"
},
{
"paragraph_id": 22,
"text": "The Reds won 98 games in 1974 but finished second to the 102-win Los Angeles Dodgers. The 1974 season started off with much excitement, as the Atlanta Braves were in town to open the season with the Reds. Hank Aaron entered opening day with 713 home runs, one shy of tying Babe Ruth's record of 714. The first pitch Aaron swung at in the 1974 season was the record-tying home run off Jack Billingham. The next day, the Braves benched Aaron, hoping to save him for his record-breaking home run on their season-opening homestand. Then-commissioner Bowie Kuhn ordered Braves management to play Aaron the next day, where he narrowly missed a historic home run in the fifth inning. Aaron went on to set the record in Atlanta two nights later. The 1974 season also saw the debut of Hall of Fame radio announcer Marty Brennaman after Al Michaels left the Reds to broadcast for the San Francisco Giants.",
"title": "Franchise history"
},
{
"paragraph_id": 23,
"text": "With 1975, the Big Red Machine lineup solidified with the \"Great Eight\" starting team of Johnny Bench (catcher), Tony Pérez (first base), Joe Morgan (second base), Dave Concepción (shortstop), Pete Rose (third base), Ken Griffey (right field), César Gerónimo (center field) and George Foster (left field). The starting pitchers included Don Gullett, Fred Norman, Gary Nolan, Jack Billingham, Pat Darcy and Clay Kirby. The bullpen featured Rawly Eastwick and Will McEnaney, who combined for 37 saves, and veterans Pedro Borbón and Clay Carroll. On Opening Day, Rose still played in left field and Foster was not a starter, while John Vukovich, an off-season acquisition, was the starting third baseman. While Vuckovich was a superb fielder, he was a weak hitter. In May, with the team off to a slow start and trailing the Dodgers, Sparky Anderson made a bold move by moving Rose to third base, a position where he had very little experience, and inserting Foster in left field. This was the jolt that the Reds needed to propel them into first place, with Rose proving to be reliable on defense and the addition of Foster to the outfield giving the offense some added punch. During the season, the Reds compiled two notable streaks: 1.) winning 41 out of 50 games in one stretch, and 2.) by going a month without committing any errors on defense.",
"title": "Franchise history"
},
{
"paragraph_id": 24,
"text": "In the 1975 season, Cincinnati clinched the NL West with 108 victories before sweeping the Pittsburgh Pirates in three games to win the NL pennant. They went on to face the Boston Red Sox in the World Series, splitting the first four games and taking Game 5. After a three-day rain delay, the two teams met in Game 6, considered by many to be the best World Series game ever. The Reds were ahead 6–3 with five outs left when the Red Sox tied the game on former Red Bernie Carbo's three-run home run, his second pinch-hit, three-run homer in the series. After a few close calls both ways, Carlton Fisk hit a dramatic 12th-inning home run off the foul pole in left field to give the Red Sox a 7–6 win and force a decisive game 7. Cincinnati prevailed the next day when Morgan's RBI single won Game 7 and gave the Reds their first championship in 35 years. The Reds have not lost a World Series game since Carlton Fisk's home run, a span of nine straight wins.",
"title": "Franchise history"
},
{
"paragraph_id": 25,
"text": "1976 saw a return of the same starting eight in the field. The starting rotation was again led by Nolan, Gullett, Billingham and Norman, while the addition of rookies Pat Zachry and Santo Alcalá comprised an underrated staff in which four of the six had ERAs below 3.10. Eastwick, Borbon and McEnaney shared closer duties, recording 26, eight and seven saves, respectively. The Reds won the NL West by 10 games and went undefeated in the postseason, sweeping the Philadelphia Phillies (winning game 3 in their final at-bat) to return to the World Series, where they beat the Yankees at the newly renovated Yankee Stadium in the first Series held there since 1964. This was only the second-ever sweep of the Yankees in the World Series, and the Reds became the first NL team since the 1921–22 New York Giants to win consecutive World Series championships. To date, the 1975 and 1976 Reds were the last NL team to repeat as champions.",
"title": "Franchise history"
},
{
"paragraph_id": 26,
"text": "Beginning with the 1970 National League pennant, the Reds beat either of the two Pennsylvania-based clubs – the Philadelphia Phillies and the Pittsburgh Pirates – to win their pennants (they beat the Pirates in 1970, 1972, 1975 and 1990, and the Phillies in 1976), making the Big Red Machine part of the rivalry between the two Pennsylvania teams. In 1979, Pete Rose added further fuel to the Big Red Machine, being part of the rivalry when he signed with the Phillies and helped them win their first World Series in 1980.",
"title": "Franchise history"
},
{
"paragraph_id": 27,
"text": "The late 1970s brought turmoil and change to the Reds. Popular Tony Pérez was sent to the Montreal Expos after the 1976 season, breaking up the Big Red Machine's starting lineup. Manager Sparky Anderson and general manager Bob Howsam later considered this trade to be the biggest mistake of their careers. Starting pitcher Don Gullett left via free agency and signed with the New York Yankees. In an effort to fill that gap, a trade with the Oakland Athletics for starting ace Vida Blue was arranged during the 1977–78 offseason. However, then-commissioner Bowie Kuhn vetoed the trade in order to maintain competitive balance in baseball; some have suggested that the actual reason had more to do with Kuhn's continued feud with Athletics owner Charlie Finley. On June 15, 1977, the Reds acquired pitcher Tom Seaver from the New York Mets for Pat Zachry, Doug Flynn, Steve Henderson and Dan Norman. In other deals that proved to be less successful, the Reds traded Gary Nolan to the California Angels for Craig Hendrickson; Rawly Eastwick to the St. Louis Cardinals for Doug Capilla; and Mike Caldwell to the Milwaukee Brewers for Rick O'Keeffe and Garry Pyka, as well as Rick Auerbach from Texas. The end of the Big Red Machine era was heralded by the replacement of general manager Bob Howsam with Dick Wagner.",
"title": "Franchise history"
},
{
"paragraph_id": 28,
"text": "In his last season as a Red, Rose gave baseball a thrill as he challenged Joe DiMaggio's 56-game hitting streak, tying for the second-longest streak ever at 44 games. The streak came to an end in Atlanta after striking out in his fifth at-bat in the game against Gene Garber. Rose also earned his 3,000th hit that season, on his way to becoming baseball's all-time hits leader when he rejoined the Reds in the mid-1980s. The year also witnessed the only no-hitter of Hall of Fame pitcher Tom Seaver's career, coming against the St. Louis Cardinals on June 16, 1978.",
"title": "Franchise history"
},
{
"paragraph_id": 29,
"text": "After the 1978 season and two straight second-place finishes, Wagner fired manager Anderson in a move that proved to be unpopular. Pete Rose, who had played almost every position for the team except pitcher, shortstop and catcher since 1963, signed with Philadelphia as a free agent. By 1979, the starters were Bench (catcher), Dan Driessen (first base), Morgan (second base), Concepción (shortstop) and Ray Knight (third base), with Griffey, Foster and Geronimo again in the outfield. The pitching staff had experienced a complete turnover since 1976, except for Fred Norman. In addition to ace starter Tom Seaver, the remaining starters were Mike LaCoss, Bill Bonham and Paul Moskau. In the bullpen, only Borbon had remained. Dave Tomlin and Mario Soto worked middle relief, with Tom Hume and Doug Bair closing. The Reds won the 1979 NL West behind the pitching of Seaver, but were dispatched in the NL playoffs by the Pittsburgh Pirates. Game 2 featured a controversial play in which a ball hit by Pittsburgh's Phil Garner was caught by Reds outfielder Dave Collins but was ruled a trap, setting the Pirates up to take a 2–1 lead. The Pirates swept the series 3 games to 0 and went on to win the World Series against the Baltimore Orioles.",
"title": "Franchise history"
},
{
"paragraph_id": 30,
"text": "The 1981 team fielded a strong lineup, with only Concepción, Foster and Griffey retaining their spots from the 1975–76 heyday. After Johnny Bench was able to play only a few games as catcher each year after 1980 due to ongoing injuries, Joe Nolan took over as starting catcher. Driessen and Bench shared first base, and Knight starred at third. Morgan and Geronimo had been replaced at second base and center field by Ron Oester and Dave Collins, respectively. Mario Soto posted a banner year starting on the mound, only surpassed by the outstanding performance of Seaver's Cy Young runner-up season. La Coss, Bruce Berenyi and Frank Pastore rounded out the starting rotation. Hume again led the bullpen as closer, joined by Bair and Joe Price. In 1981, the Reds had the best overall record in baseball, but finished second in the division in both of the half-seasons that resulted from a mid-season players' strike, and missed the playoffs. To commemorate this, a team photo was taken, accompanied by a banner that read \"Baseball's Best Record 1981.\"",
"title": "Franchise history"
},
{
"paragraph_id": 31,
"text": "By 1982, the Reds were a shell of the original Red Machine, having lost 101 games that year. Johnny Bench, after an unsuccessful transition to third base, retired a year later.",
"title": "Franchise history"
},
{
"paragraph_id": 32,
"text": "After the heartbreak of 1981, general manager Dick Wagner pursued the strategy of ridding the team of veterans, including third baseman Knight and the entire starting outfield of Griffey, Foster and Collins. Bench, after being able to catch only seven games in 1981, was moved from platooning at first base to be the starting third baseman; Alex Treviño became the regular starting catcher. The outfield was staffed with Paul Householder, César Cedeño and future Colorado Rockies and Pittsburgh Pirates manager Clint Hurdle on Opening Day. Hurdle was an immediate bust, and rookie Eddie Milner took his place in the starting outfield early in the year. The highly touted Householder struggled throughout the year despite extensive playing time. Cedeno, while providing steady veteran play, was a disappointment, unable to recapture his glory days with the Houston Astros. The starting rotation featured the emergence of a dominant Mario Soto and featured strong years by Pastore and Bruce Berenyi, but Seaver was injured all year, and their efforts were wasted without a strong offensive lineup. Tom Hume still led the bullpen along with Joe Price, but the colorful Brad \"The Animal\" Lesley was unable to consistently excel, and former All-Star Jim Kern was also a disappointment. Kern was also publicly upset over having to shave off his prominent beard to join the Reds, and helped force the issue of getting traded during mid-season by growing it back. The season also saw the midseason firing of manager John McNamara, who was replaced as skipper by Russ Nixon.",
"title": "Franchise history"
},
{
"paragraph_id": 33,
"text": "The Reds fell to the bottom of the Western Division for the next few years. After the 1982 season, Seaver was traded back to the Mets. 1983 found Dann Bilardello behind the plate, Bench returning to part-time duty at first base, rookie Nick Esasky taking over at third base and Gary Redus taking over from Cedeno. Tom Hume's effectiveness as a closer had diminished, and no other consistent relievers emerged. Dave Concepción was the sole remaining starter from the Big Red Machine era.",
"title": "Franchise history"
},
{
"paragraph_id": 34,
"text": "Wagner's tenure ended in 1983, when Howsam, the architect of the Big Red Machine, was brought back. The popular Howsam began his second term as the Reds' general manager by signing Cincinnati native Dave Parker as a free agent from Pittsburgh. In 1984, the Reds began to move up, depending on trades and some minor leaguers. In that season, Dave Parker, Dave Concepción and Tony Pérez were in Cincinnati uniforms. In August of the same year, Pete Rose was reacquired and hired to be the Reds player-manager. After raising the franchise from the grave, Howsam gave way to the administration of Bill Bergesch, who attempted to build the team around a core of highly regarded young players in addition to veterans like Parker. However, he was unable to capitalize on an excess of young and highly touted position players including Kurt Stillwell, Tracy Jones and Kal Daniels by trading them for pitching. Despite the emergence of Tom Browning as Rookie of the Year in 1985, when he won 20 games, the rotation was devastated by the early demise of Mario Soto's career to arm injury.",
"title": "Franchise history"
},
{
"paragraph_id": 35,
"text": "Under Bergesch, the Reds finished second four times from 1985 to 1989. Among the highlights, Rose became the all-time hits leader, Tom Browning threw a perfect game, Eric Davis became the first player in baseball history to hit at least 35 home runs and steal 50 bases, and Chris Sabo was the 1988 National League Rookie of the Year. The Reds also had a bullpen star in John Franco, who was with the team from 1984 to 1989. Rose once had Concepción pitch late in a game at Dodger Stadium. In 1989, following the release of the Dowd Report, which accused Rose of betting on baseball games, Rose was banned from baseball by Commissioner Bart Giamatti, who declared him guilty of \"conduct detrimental to baseball.\" Controversy also swirled around Reds owner Marge Schott, who was accused several times of ethnic and racial slurs.",
"title": "Franchise history"
},
{
"paragraph_id": 36,
"text": "In 1987, general manager Bergesch was replaced by Murray Cook, who initiated a series of deals that would finally bring the Reds back to the championship, starting with acquisitions of Danny Jackson and José Rijo. An aging Dave Parker was let go after a revival of his career in Cincinnati following the Pittsburgh drug trials. Barry Larkin emerged as the starting shortstop over Kurt Stillwell, who, along with reliever Ted Power, was traded for Jackson. In 1989, Cook was succeeded by Bob Quinn, who put the final pieces of the championship puzzle together, with the acquisitions of Hal Morris, Billy Hatcher and Randy Myers.",
"title": "Franchise history"
},
{
"paragraph_id": 37,
"text": "In 1990, the Reds, under new manager Lou Piniella, shocked baseball by leading the NL West from wire-to-wire, making them the only NL team to do so. Winning their first nine games, they started 33–12 and maintained their lead throughout the year. Led by Chris Sabo, Barry Larkin, Eric Davis, Paul O'Neill and Billy Hatcher on the field, and by José Rijo, Tom Browning and the \"Nasty Boys\" – Rob Dibble, Norm Charlton and Randy Myers – on the mound, the Reds took out the Pirates in the NLCS. The Reds swept the heavily favored Oakland Athletics in four straight and extended a winning streak in the World Series to nine consecutive games. This Series, however, saw Eric Davis severely bruise a kidney diving for a fly ball in Game 4, and his play was greatly limited the next year.",
"title": "Franchise history"
},
{
"paragraph_id": 38,
"text": "In 1992, Quinn was replaced in the front office by Jim Bowden. On the field, manager Lou Piniella wanted outfielder Paul O'Neill to be a power hitter to fill the void Eric Davis left when he was traded to the Los Angeles Dodgers in exchange for Tim Belcher. However, O'Neill only hit .246 with 14 home runs. The Reds returned to winning after a losing season in 1991, but 90 wins was only enough for second place behind the division-winning Atlanta Braves. Before the season ended, Piniella got into an altercation with reliever Rob Dibble. In the offseason, Paul O'Neill was traded to the New York Yankees for outfielder Roberto Kelly, who was a disappointment for the Reds over the next couple of years, while O'Neill led a downtrodden Yankees franchise to a return to glory. Around this time, the Reds would replace their Big Red Machine–era uniforms in favor of a pinstriped uniform with no sleeves.",
"title": "Franchise history"
},
{
"paragraph_id": 39,
"text": "For the 1993 season, Piniella was replaced by fan favorite Tony Pérez, but he lasted only 44 games at the helm before being replaced by Davey Johnson. With Johnson steering the team, the Reds made steady progress. In 1994, the Reds were in the newly created National League Central Division with the Chicago Cubs, St. Louis Cardinals, and rivals Pittsburgh Pirates and Houston Astros. By the time the strike hit, the Reds finished a half-game ahead of the Houston Astros for first place in the NL Central. In 1995, the Reds won the division thanks to MVP Barry Larkin. After defeating the NL West champion Dodgers in the first NLDS since 1981, however, they lost to the Atlanta Braves.",
"title": "Franchise history"
},
{
"paragraph_id": 40,
"text": "Team owner Marge Schott announced mid-season that Johnson would be gone by the end of the year, regardless of the team's outcome, to be replaced by former Reds third baseman Ray Knight. Johnson and Schott had never gotten along, and she did not approve of Johnson living with his fiancée before they were married. In contrast, Knight, along with his wife, professional golfer Nancy Lopez, were friends of Schott. The team took a dive under Knight, who was unable to complete two full seasons as manager and was subjected to complaints in the press about his strict managerial style.",
"title": "Franchise history"
},
{
"paragraph_id": 41,
"text": "In 1999, the Reds won 96 games, led by manager Jack McKeon, but lost to the New York Mets in a one-game playoff. Earlier that year, Schott sold controlling interest in the Reds to Cincinnati businessman Carl Lindner. Despite an 85–77 finish in 2000, and being named 1999 NL manager of the year, McKeon was fired after the 2000 season. The Reds did not have another winning season until 2010.",
"title": "Franchise history"
},
{
"paragraph_id": 42,
"text": "Riverfront Stadium, by then known as Cinergy Field, was demolished in 2002. Great American Ball Park opened in 2003, with high expectations for a team led by local favorites, including outfielder Ken Griffey Jr., shortstop Barry Larkin and first baseman Sean Casey. Although attendance improved considerably with the new ballpark, the Reds continued to lose. Schott had not invested much in the farm system since the early 1990s, leaving the team relatively thin on talent. After years of promises that the club was rebuilding toward the opening of the new ballpark, general manager Jim Bowden and manager Bob Boone were fired on July 28. This broke up the father-son combo of manager Bob Boone and third baseman Aaron Boone, and the latter was soon traded to the New York Yankees. Tragedy struck in November when Dernell Stenson, a promising young outfielder, was shot and killed during a carjack. Following the season, Dan O'Brien was hired as the Reds' 16th general manager on October 27, 2003, succeeding Jim Bowden.",
"title": "Franchise history"
},
{
"paragraph_id": 43,
"text": "The 2004 and 2005 seasons continued the trend of big-hitting, poor pitching and poor records. Griffey, Jr. joined the 500 home run club in 2004, but was again hampered by injuries. Adam Dunn emerged as consistent home run hitter, including a 535-foot (163 m) home run against José Lima. He also broke the major league record for strikeouts in 2004. Although a number of free agents were signed before 2005, the Reds were quickly in last place, and manager Dave Miley was forced out in the 2005 midseason and replaced by Jerry Narron. Like many other small-market clubs, the Reds dispatched some of their veteran players and began entrusting their future to a young nucleus that included Adam Dunn and Austin Kearns.",
"title": "Franchise history"
},
{
"paragraph_id": 44,
"text": "2004 saw the opening of the Cincinnati Reds Hall of Fame (HOF), which had been in existence in name only since the 1950s, with player plaques, photos and other memorabilia scattered throughout their front offices. Ownership and management desired a standalone facility where the public could walk through interactive displays, see locker room recreations, watch videos of classic Reds moments and peruse historical items, such as the history of Reds uniforms dating back to the 1920s or a baseball marking every hit Pete Rose had during his career.",
"title": "Franchise history"
},
{
"paragraph_id": 45,
"text": "Robert Castellini took over as controlling owner from Lindner in 2006. Castellini promptly fired general manager Dan O'Brien and hired Wayne Krivsky. The Reds made a run at the playoffs, but ultimately fell short. The 2007 season was again mired in mediocrity. Midway through the season, Jerry Narron was fired as manager and replaced by Pete Mackanin. The Reds ended up posting a winning record under Mackanin, but finished the season in fifth place in the Central Division. Mackanin was manager in an interim capacity only, and the Reds, seeking a big name to fill the spot, ultimately brought in Dusty Baker. Early in the 2008 season, Krivsky was fired and replaced by Walt Jocketty. Although the Reds did not win under Krivsky, he is credited with revamping the farm system and signing young talent that could potentially lead the team to success in the future.",
"title": "Franchise history"
},
{
"paragraph_id": 46,
"text": "The Reds failed to post winning records in both 2008 and 2009. In 2010, with NL MVP Joey Votto and Gold Glovers Brandon Phillips and Scott Rolen, the Reds posted a 91–71 record and were NL Central champions. The following week, the Reds became only the second team in MLB history to be no-hit in a postseason game when Philadelphia's Roy Halladay shut down the National League's No. 1 offense in Game 1 of the NLDS. The Reds eventually lost in a three-game sweep of the NLDS to Philadelphia.",
"title": "Franchise history"
},
{
"paragraph_id": 47,
"text": "After coming off their surprising 2010 NL Central Division title, the Reds fell short of many expectations for the 2011 season. Multiple injuries and inconsistent starting pitching played a big role in their mid-season collapse, along with a less productive offense as compared to the previous year. The Reds ended the season at 79–83, and won the 2012 NL Central Division Title. On September 28, Homer Bailey threw a 1–0 no-hitter against the Pittsburgh Pirates, marking the first Reds no-hitter since Tom Browning's perfect game in 1988. Finishing with a 97–65 record, the Reds earned the second seed in the Division Series and a matchup with the eventual World Series champion, the San Francisco Giants. After taking a 2–0 lead with road victories at AT&T Park, they headed home looking to win the series. However, they lost three straight at their home ballpark, becoming the first National League team since the Chicago Cubs in 1984 to lose a division series after leading 2–0.",
"title": "Franchise history"
},
{
"paragraph_id": 48,
"text": "In the offseason, the team traded outfielder Drew Stubbs – as part of a three-team deal with the Arizona Diamondbacks and Cleveland Indians – to the Indians, and in turn received right fielder Shin-Soo Choo. On July 2, 2013, Homer Bailey pitched a no-hitter against the San Francisco Giants for a 4–0 Reds victory, making him the third pitcher in Reds history with two complete-game no-hitters in their career.",
"title": "Franchise history"
},
{
"paragraph_id": 49,
"text": "Following six consecutive losses to close out the 2013 season, including a loss to the Pittsburgh Pirates at PNC Park in the National League wild-card playoff game, the Reds decided to fire Dusty Baker. During his six years as manager, Baker led the Reds to the playoff three times; however, they never advanced beyond the first round.",
"title": "Franchise history"
},
{
"paragraph_id": 50,
"text": "On October 22, 2013, the Reds hired pitching coach Bryan Price to replace Baker as manager. Under Price, the Reds were led by pitchers Johnny Cueto and the hard-throwing Aroldis Chapman. The offense was led by All-Star third baseman Todd Frazier, Joey Votto and Brandon Phillips, but although they had plenty of star power, the Reds never got off to a good start and ended the season in lowly fourth place in the division to go along with a 76–86 record. During the offseason, the Reds traded pitchers Alfredo Simón to the Tigers and Mat Latos to the Marlins. In return, they acquired young talents such as Eugenio Suárez and Anthony DeSclafani. They also acquired veteran slugger Marlon Byrd from the Phillies to play left field.",
"title": "Franchise history"
},
{
"paragraph_id": 51,
"text": "The Reds' 2015 season wasn't much better, as they finished with the second-worst record in the league at 64–98, their worst finish since 1982. The Reds were forced to trade star pitchers Johnny Cueto and Mike Leake to the Kansas City Royals and San Francisco Giants, respectively, receiving minor league pitching prospects for both. Shortly after the season's end, the Reds traded Home Run Derby champion Todd Frazier to the Chicago White Sox and closing pitcher Aroldis Chapman to the New York Yankees.",
"title": "Franchise history"
},
{
"paragraph_id": 52,
"text": "In 2016, the Reds broke the then-record for home runs allowed during a single season, The Reds held this record until the 2019 season when it was broken by the Baltimore Orioles. The previous record holder was the 1996 Detroit Tigers with 241 home runs yielded to opposing teams. The Reds went 68–94 and again were one of the worst teams in MLB. The Reds traded outfielder Jay Bruce to the Mets just before the July 31 non-waiver trade deadline in exchange for two prospects: infielder Dilson Herrera and pitcher Max Wotell. During the offseason, the Reds traded Brandon Phillips to the Atlanta Braves in exchange for two minor league pitchers.",
"title": "Franchise history"
},
{
"paragraph_id": 53,
"text": "On September 25, 2020, the Reds earned their first postseason berth since 2013, ultimately earning the seventh seed in the expanded 2020 playoffs. The 2020 season had been shortened to 60 games as a result of the COVID-19 pandemic. The Reds lost their first-round series against the Atlanta Braves two games to none.",
"title": "Franchise history"
},
{
"paragraph_id": 54,
"text": "The Reds finished the 2021 season with a record of 83–79, good for third in the NL Central.",
"title": "Franchise history"
},
{
"paragraph_id": 55,
"text": "In 2022, the Reds started out the regular season with a ghastly 3–22 record. Their three-game win total in 25 games had not seen since the 2003 Detroit Tigers and was tied for second-worst overall behind the 1988 Baltimore Orioles, who started 2–23 in their first 25 games. They would finish the season with a record of 62–100.",
"title": "Franchise history"
},
{
"paragraph_id": 56,
"text": "The 2023 season found the Reds in contention for a wild card birth up until the final weekend of the season. They eventually fell short of a playoff birth by 2 games with a record of 82–80. The team was led by a group of young players including rookies Spencer Steer, Matt McLain and Elly De La Cruz. Cruz caused quite a buzz from the beginning of his mid-season call up and in his 15th career game became the first Red to hit for the cycle since Eric Davis in 1989. At the end of the season, retirement speculation surrounded former MVP Joey Votto.",
"title": "Franchise history"
},
{
"paragraph_id": 57,
"text": "The Cincinnati Reds play their home games at Great American Ball Park, located at 100 Joe Nuxhall Way, in downtown Cincinnati. Great American Ball Park opened in 2003 at the cost of $290 million and has a capacity of 42,271. Along with serving as the home field for the Reds, the stadium also holds the Cincinnati Reds Hall of Fame, which was added as a part of Reds tradition allowing fans to walk through the history of the franchise as well as participating in many interactive baseball features.",
"title": "Ballparks"
},
{
"paragraph_id": 58,
"text": "Great American Ball Park is the seventh home of the Cincinnati Reds, built immediately to the east of the site on which Riverfront Stadium, later named Cinergy Field, once stood. The first ballpark the Reds occupied was Bank Street Grounds from 1882 to 1883 until they moved to League Park I in 1884, where they would remain until 1893. Through the late 1890s and early 1900s, the Reds moved to two different parks, where they stayed for less than 10 years: League Park II was the third home field for the Reds from 1894 to 1901, and then they moved to the Palace of the Fans, which served as the home of the Reds in the 1910s. It was in 1912 that the Reds moved to Crosley Field, which they called home for 58 years. Crosley served as the home field for the Reds for two World Series titles and five National League pennants. Beginning June 30, 1970, and during the dynasty of the Big Red Machine, the Reds played in Riverfront Stadium, appropriately named due to its location right by the Ohio River. Riverfront saw three World Series titles and five National League pennants. It was in the late 1990s that the city agreed to build two separate stadiums on the riverfront for the Reds and the Cincinnati Bengals. Thus, in 2003, the Reds began a new era with the opening of the current stadium.",
"title": "Ballparks"
},
{
"paragraph_id": 59,
"text": "The Reds hold their spring training in Goodyear, Arizona, at Goodyear Ballpark. The Reds moved into this stadium and the Cactus League in 2010 after staying in the Grapefruit League for most of their history. The Reds share Goodyear Park with their rivals in Ohio, the Cleveland Guardians.",
"title": "Ballparks"
},
{
"paragraph_id": 60,
"text": "Throughout the team's history, many different variations of the classic wishbone \"C\" logo have been introduced. In the team's early history, the Reds logo has been simply the wishbone \"C\" with the word \"REDS\" inside, the only colors used being red and white. However, during the 1950s, during the renaming and re-branding of the team as the Cincinnati Redlegs because of the connections to communism of the word \"Reds,\" the color blue was introduced as part of the Reds color combination. During the 1960s and 1970s, the Reds saw a move toward the more traditional colors, abandoning the navy blue. A new logo also appeared with the new era of baseball in 1972, when the team went away from the script \"REDS\" inside of the \"C,\" instead putting their mascot, Mr. Redlegs, in its place as well as putting the name of the team inside of the wishbone \"C.\" In the 1990s, the more traditional, early logos of Reds came back with the current logo reflecting more of what the team's logo was when they were founded.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 61,
"text": "Along with the logo, the Reds' uniforms have been changed many different times throughout their history. Following their departure from being called the \"Redlegs\" in 1956, the Reds made a groundbreaking change to their uniforms with the use of sleeveless jerseys, seen only once before in the Major Leagues by the Chicago Cubs. At home and away, the cap was all-red with a white wishbone \"C\" insignia. The long-sleeved undershirts were red. The uniform was plain white with a red wishbone \"C\" logo on the left and the uniform number on the right. On the road, the wishbone \"C\" was replaced by the mustachioed \"Mr. Redlegs\" logo, the pillbox-hat-wearing man with a baseball for a head. The home stockings were red with six white stripes. The away stockings had only three white stripes.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 62,
"text": "The Reds changed uniforms again in 1961, when they replaced the traditional wishbone \"C\" insignia with an oval-shaped \"C\" logo, but continued to use the sleeveless jerseys. At home, the Reds wore white caps with the red bill with the oval \"C\" in red, white sleeveless jerseys with red pinstripes, with the oval \"C-REDS\" logo in black with red lettering on the left breast and the number in red on the right. The gray away uniform included a gray cap with the red oval \"C\" and a red bill. Their gray away uniforms, which also included a sleeveless jersey, bore \"CINCINNATI\" in an arched block style across with the number below on the left. In 1964, players' last names were placed on the back of each set of uniforms, below the numbers. Those uniforms were scrapped after the 1966 season.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 63,
"text": "However, the Cincinnati uniform design most familiar to baseball enthusiasts is the one whose basic form, with minor variations, held sway for 25 seasons from 1967 to 1992. Most significantly, the point was restored to the \"C\" insignia, making it a wishbone again. During this era, the Reds wore all-red caps both at home and on the road. The caps bore the simple wishbone \"C\" insignia in white. The uniforms were standard short-sleeved jerseys and standard trousers – white at home and gray on the road. The home uniform featured the wishbone \"C-REDS\" logo in red with white type on the left breast and the uniform number in red on the right. The away uniform bore \"CINCINNATI\" in an arched block style across the front with the uniform number below on the left. Red, long-sleeved undershirts and plain red stirrups over white sanitary stockings completed the basic design. The Reds wore pinstriped home uniforms in 1967 only, and the uniforms were flannel through 1971, changing to double-knits with pullover jerseys and belt-less pants in 1972. Those uniforms lasted 20 seasons, and the 1992 Reds were the last MLB team to date whose primary uniforms featured pullover jerseys and belt-less pants.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 64,
"text": "The 1993 uniforms, which did away with the pullovers and brought back button-down jerseys, kept white and gray as the base colors for the home and away uniforms, but added red pinstripes. The home jerseys were sleeveless, showing more of the red undershirts. The color scheme of the \"C-REDS\" logo on the home uniform was reversed, now red lettering on a white background. A new home cap was created that had a red bill and a white crown with red pinstripes and a red wishbone \"C\" insignia. The away uniform kept the all-red cap, but moved the uniform number to the left to more closely match the home uniform. The only additional change to these uniforms was the introduction of black as a primary color of the Reds in 1999, especially on their road uniforms.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 65,
"text": "The Reds' latest uniform change came in December 2006, which differed significantly from the uniforms worn during the previous eight seasons. The home caps returned to an all-red design with a white wishbone \"C,\" lightly outlined in black. Caps with red crowns and a black bill became the new road caps. Additionally, the sleeveless jersey was abandoned for a more traditional design. The numbers and lettering for the names on the backs of the jerseys were changed to an early 1900s–style typeface, and a handlebar-mustached \"Mr. Redlegs\" – reminiscent of the logo used by the Reds in the 1950s and 1960s – was placed on the left sleeve.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 66,
"text": "In 2023, the Reds and Nike, Inc. introduced a new City Connect jersey, which features a modified \"C\" on the cap and on the sleeve of the jersey. For the Jersey, it features \"CINCY\" (shorten for Cincinnati) across the chest of the jersey. On the collar, it features an Ohio Buckeye and also features the motto of Cincinnati \"Juncta Juvant\" (Latin for \"Strength of Unity\"). The design of the jersey is to inspired the future of the Reds jersey.",
"title": "Logos and uniforms"
},
{
"paragraph_id": 67,
"text": "The Cincinnati Reds have retired 10 numbers in franchise history, as well as honoring Jackie Robinson, whose number is retired league-wide around Major League Baseball.",
"title": "Awards and accolades"
},
{
"paragraph_id": 68,
"text": "All of the retired numbers are located at Great American Ball Park behind home plate on the outside of the press box. Along with the retired players' and managers' number, the following broadcasters are honored with microphones by the broadcast booth: Marty Brennaman, Waite Hoyt and Joe Nuxhall.",
"title": "Awards and accolades"
},
{
"paragraph_id": 69,
"text": "On April 15, 1997, No. 42 was retired throughout Major League Baseball in honor of Jackie Robinson.",
"title": "Awards and accolades"
},
{
"paragraph_id": 70,
"text": "The Reds have hosted the Major League Baseball All-Star Game five times: twice at Crosley Field (1938, 1953), twice at Riverfront Stadium (1970, 1988) and once at Great American Ball Park (2015).",
"title": "MLB All-Star Games"
},
{
"paragraph_id": 71,
"text": "The Ohio Cup was an annual pre-season baseball game, which pitted the Ohio rivals Cleveland Indians and Cincinnati Reds. In its first series it was a single-game cup, played each year at minor-league Cooper Stadium in Columbus, and was staged just days before the start of each new Major League Baseball season.",
"title": "Rivalries"
},
{
"paragraph_id": 72,
"text": "A total of eight Ohio Cup games were played, between 1989 and 1996, with the Indians winning six of them. The winner of the game each year was awarded the Ohio Cup in postgame ceremonies. The Ohio Cup was a favorite among baseball fans in Columbus, with attendances regularly topping 15,000.",
"title": "Rivalries"
},
{
"paragraph_id": 73,
"text": "The Ohio Cup games ended with the introduction of regular-season interleague play in 1997. Thereafter, the two teams competed annually in the regular-season Battle of Ohio or Buckeye Series. The Ohio Cup was revived in 2008 as a reward for the team with the better overall record in the Reds–Indians series each year.",
"title": "Rivalries"
},
{
"paragraph_id": 74,
"text": "The Pirates-Reds rivalry at one point in time was one of the fiercest matchups in the National League during the 1970s; both teams often met in the postseason multiple times prior to both being realigned to the National League Central in 1993. The two teams date far into the infancy of MLB, having both been founded in the 1880s, and first met during the 1900 MLB season. Both teams combine for 10 World Series championships and 18 pennants. The Pirates and Reds met 5 times during the NLCS in 1970, 1972, 1975, 1979, and 1990. Most recently; both teams met again during the 2013 NL Wild Card Game.",
"title": "Rivalries"
},
{
"paragraph_id": 75,
"text": "As of 2023, the Pirates currently lead the rivalry 1141–1113, However; the Reds lead in postseason wins 13–8.",
"title": "Rivalries"
},
{
"paragraph_id": 76,
"text": "The Dodgers–Reds rivalry was one of the most intense during the 1970s through the early 1990s. They often competed for the NL West division title. From 1970 to 1990, they had eleven 1–2 finishes in the standings, with seven of them being within 5½ games or fewer. Both teams also played in numerous championships during this span, combining to win 10 NL Pennants and 5 World Series titles from 1970–1990 Notably as the Big Red Machine teams clashed frequently with the Tommy Lasorda era Dodgers teams. Reds manager Sparky Anderson once said, \"I don't think there's a rivalry like ours in either league. The Giants are supposed to be the Dodgers' natural rivals, but I don't think the feeling is there anymore. It's not there the way it is with us and the Dodgers.\" The rivalry ended when division realignment moved the Reds to the NL Central. However, they did face one another in the 1995 NLDS.",
"title": "Rivalries"
},
{
"paragraph_id": 77,
"text": "The Reds' flagship radio station has been WLW, 700AM since 1969. Prior to that, the Reds were heard over WKRC, WCPO, WSAI and WCKY. WLW, a 50,000-watt station, is \"clear channel\" in more than one way, as iHeartMedia owns the \"blowtorch\" outlet, which is also known as \"The Nation's Station.\" Reds games can be heard on over 100 local radio stations through the Reds on Radio Network.",
"title": "Media"
},
{
"paragraph_id": 78,
"text": "Since 2020, the Reds broadcast team has been former Pensacola Blue Wahoos radio play-by-play announcer Tommy Thrall and retired relief pitcher Jeff Brantley on color commentary.",
"title": "Media"
},
{
"paragraph_id": 79,
"text": "Marty Brennaman called Reds games from 1974 to 2019, most famously alongside former Reds pitcher and color commentator Joe Nuxhall through 2007. Brennaman has won the Ford C. Frick Award for his work, which includes his famous call of \"... and this one belongs to the Reds!\" after a win. Nuxhall preceded Brennaman in the Reds' booth, beginning in 1967 (the year after his retirement as an active player) until his death in 2007. (From 2004 to 2007, Nuxhall only called select home games.)",
"title": "Media"
},
{
"paragraph_id": 80,
"text": "In 2007, Thom Brennaman, a veteran announcer seen nationwide on Fox Sports, joined his father Marty in the radio booth. Brantley, formerly of ESPN, also joined the network in 2007. Three years later in 2010, Brantley and Thom Brennaman's increased TV schedule led to more appearances for Jim Kelch, who had filled in on the network since 2008. Kelch's contract expired after the 2017 season.",
"title": "Media"
},
{
"paragraph_id": 81,
"text": "In 2019, Thrall was brought in to provide in-game and post-game coverage, as well as act as a fill-in play-by-play announcer. He succeeded Marty Brennaman when the former retired at the end of the 2019 season.",
"title": "Media"
},
{
"paragraph_id": 82,
"text": "Televised games are seen exclusively on Bally Sports Ohio and Bally Sports Indiana. In addition, Bally Sports South televises Bally Sports Ohio broadcasts of Reds games to Tennessee and western North Carolina. George Grande, who hosted the first SportsCenter on ESPN in 1979, was the play-by-play announcer, usually alongside Chris Welsh, from 1993 until his retirement during the final game of the 2009 season. Since 2009, Grande has worked part time for the Reds as play-by-play announcer in September when Thom Brennaman is covering the NFL for Fox Sports. He has also made guest appearances throughout each season. Brennaman had been the head play-by-play commentator since 2010, with Welsh and Brantley sharing time as the color commentators. Cincinnati native Paul Keels, who left in 2011 to devote more time to his full-time job as the play-by-play announcer for the Ohio State Buckeyes Radio Network, was the Reds' backup play-by-play television announcer during the 2010 season. Jim Kelch served as Keels' replacement. The Reds also added former Reds first baseman Sean Casey – known as \"The Mayor\" by Reds fans – to do color commentary for approximately 15 games in 2011.",
"title": "Media"
},
{
"paragraph_id": 83,
"text": "NBC affiliate WLWT carried Reds games from 1948 to 1995. Among those that have called games for WLWT include Waite Hoyt, Ray Lane, Steve Physioc, Johnny Bench, Joe Morgan and Ken Wilson. Al Michaels, who established a long career with ABC and NBC, spent three years in Cincinnati early in his career. The last regularly scheduled, over-the-air broadcasts of Reds games were on WSTR-TV from 1996 to 1998. Since 2010, WKRC-TV has simulcast Opening Day games with Fox/Bally Sports Ohio, which it came into common ownership with in 2019.",
"title": "Media"
},
{
"paragraph_id": 84,
"text": "On August 19, 2020, Thom Brennaman was caught uttering a homophobic slur during a game against the Kansas City Royals. Brennaman eventually apologized for the incident and was suspended, but on September 26, he resigned from his duties as the Reds' TV play-by-play announcer. This ended the Brennamans' 46-year association with the Reds franchise, dating back to Marty's first season in 1974. Sideline reporter Jim Day served as the interim play-by-play voice for the remainder of the 2020 season, after which the Reds hired John Sadak to serve as its television play-by-play announcer.",
"title": "Media"
},
{
"paragraph_id": 85,
"text": "The Reds Community Fund, founded in 2001, is focused on the youth of the Greater Cincinnati area with the goal of improving the lives of participants by leveraging the traditions of the Reds. The fund sponsors the Reviving Baseball in Inner Cities (RBI) program with a goal of 30–50 young people graduating high school and attending college annually. It also holds an annual telethon, raising in excess of $120,000. An example of the fund's community involvement is its renovation of Hoffman Fields in the Evanston neighborhood of the city, upgrading the entire recreation complex, for a total of over 400 baseball diamonds renovated at 200 locations throughout the region.",
"title": "Community involvement"
},
{
"paragraph_id": 86,
"text": "During the COVID-19 pandemic in 2020, since no spectators were allowed at MLB games, the Reds offered fans the opportunity to purchase paper cutouts of their own photographs in the stands at Great American Ball Park. The promotion raised over $300,000 for the fund, more than the fund's traditional events such as Redsfest, the Redlegs Run, an annual golf outing and the Fox Sports Ohio Telethon.",
"title": "Community involvement"
},
{
"paragraph_id": 87,
"text": "The Cincinnati Reds farm system consists of six minor league affiliates.",
"title": "Minor league affiliations"
}
] | The Cincinnati Reds are an American professional baseball team based in Cincinnati. They compete in Major League Baseball (MLB) as a member club of the National League (NL) Central division and were a charter member of the American Association in 1881 before joining the NL in 1890. The Reds played in the NL West division from 1969 to 1993, before joining the Central division in 1994. For several years in the 1970s, they were considered the most dominant team in baseball, most notably winning the 1975 and 1976 World Series; the team was colloquially known as the "Big Red Machine" during this time, and it included Hall of Fame members Johnny Bench, Joe Morgan and Tony Pérez, as well as the controversial Pete Rose, the all-time hits leader. Overall, the Reds have won five World Series championships, nine NL pennants, one AA pennant and 10 division titles. The team plays its home games at Great American Ball Park, which opened in 2003. Bob Castellini has been the CEO of the Reds since 2006. From 1882 to 2022, the Reds' overall win–loss record is 10,775–10,601. | 2001-10-19T17:40:11Z | 2023-12-28T02:40:16Z | [
"Template:S-ttl",
"Template:Cincinnati Reds roster",
"Template:S-start-collapsible",
"Template:S-end",
"Template:More citations needed section",
"Template:Cite book",
"Template:Multiple image",
"Template:Cite magazine",
"Template:MLBTeam",
"Template:Authority control",
"Template:Main",
"Template:Frac",
"Template:Stack",
"Template:Baseball year",
"Template:Cite news",
"Template:Cite web",
"Template:Infobox MLB",
"Template:Convert",
"Template:Portal bar",
"Template:Hatnote group",
"Template:Winning percentage",
"Template:S-bef",
"Template:Navboxes",
"Template:Baseball hall of fame list",
"Template:Commons category",
"Template:Wsy",
"Template:Retired number list",
"Template:Ford C. Frick award list",
"Template:Reflist",
"Template:S-aft",
"Template:Cincinnati Reds",
"Template:Short description",
"Template:See also"
] | https://en.wikipedia.org/wiki/Cincinnati_Reds |
6,672 | Caribbean cuisine | Caribbean cuisine is a fusion of West African, Creole, Amerindian, European, Latin American, Indian/South Asian, Middle Eastern, and Chinese cuisines. These traditions were brought to the Caribbean by people who moved there from many countries. In addition, the population has created styles that are unique to the region.
As a result of colonization, Caribbean cuisine is a fusion of multiple sources: the British, Spanish, Dutch and French colonized the area and brought their respective cuisines, which mixed with West African as well as Amerindian, East Asian, Arab and South Asian influences from the enslaved, indentured and other laborers brought to work on the plantations.
In 1493, during the voyages of Christopher Columbus, the Spaniards introduced a variety of ingredients, including coconut, chickpeas, cilantro, eggplants, onions and garlic.
Ingredients that are common in most islands' dishes are rice, plantains, beans, cassava, cilantro, bell peppers, chickpeas, tomatoes, sweet potatoes, coconut, and any of various meats that are locally available like beef, poultry, pork or fish. A characteristic seasoning for the region is a green herb-and-oil-based marinade called sofrito, which imparts a flavor profile which is quintessentially Caribbean in character. Ingredients may include garlic, onions, scotch bonnet peppers, celery, green onions, and herbs like cilantro, Mexican mint, chives, marjoram, rosemary, tarragon and thyme. This green seasoning is used for a variety of dishes like curries, stews and roasted meats.
Traditional dishes are so important to regional culture that, for example, the local version of Caribbean goat stew has been chosen as the official national dish of Montserrat and is also one of the signature dishes of St. Kitts and Nevis. Another popular dish in the Anglophone Caribbean is called "cook-up", or pelau. Ackee and saltfish is another popular dish that is unique to Jamaica. Callaloo is a dish containing leafy vegetables such as spinach and sometimes okra amongst others, widely distributed in the Caribbean, with a distinctively mixed African and indigenous character.
The variety of dessert dishes in the area also reflects the mixed origins of the recipes. In some areas, black cake, a derivative of English Christmas pudding, may be served, especially on special occasions.
Over time, food from the Caribbean has evolved into a narrative technique through which Caribbean culture has been accentuated and promoted. However, studying Caribbean culture through a literary lens runs the risk of generalizing exoticist ideas about food practices from the tropics. Some food theorists argue that this depiction of Caribbean food in various forms of media contributes to inaccurate conceptions of the region's culinary practices, which are much more grounded in unpleasant historical events. Therefore, it can be argued that the connection between the idea of the Caribbean as the ultimate paradise and of Caribbean food as exotic is based on inaccurate information.
{
"paragraph_id": 0,
"text": "Caribbean cuisine is a fusion of West African, Creole, Amerindian, European, Latin American, Indian/South Asian, Middle Eastern, and Chinese. These traditions were brought from many countries when they moved to the Caribbean. In addition, the population has created styles that are unique to the region.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As a result of the colonization, the Caribbean is a fusion of multiple sources; British, Spanish, Dutch and French colonized the area and brought their respective cuisines that mixed with West African as well as Amerindian, East Asian, Arab, South Asian influences from enslaved, indentured and other laborers brought to work on the plantations.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "In 1493, during the voyages of Christopher Columbus, the Spaniards introduced a variety of ingredients, including coconut, chickpeas, cilantro, eggplants, onions and garlic.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Ingredients that are common in most islands' dishes are rice, plantains, beans, cassava, cilantro, bell peppers, chickpeas, tomatoes, sweet potatoes, coconut, and any of various meats that are locally available like beef, poultry, pork or fish. A characteristic seasoning for the region is a green herb-and-oil-based marinade called sofrito, which imparts a flavor profile which is quintessentially Caribbean in character. Ingredients may include garlic, onions, scotch bonnet peppers, celery, green onions, and herbs like cilantro, Mexican mint, chives, marjoram, rosemary, tarragon and thyme. This green seasoning is used for a variety of dishes like curries, stews and roasted meats.",
"title": "Caribbean dishes"
},
{
"paragraph_id": 4,
"text": "Traditional dishes are so important to regional culture that, for example, the local version of Caribbean goat stew has been chosen as the official national dish of Montserrat and is also one of the signature dishes of St. Kitts and Nevis. Another popular dish in the Anglophone Caribbean is called \"cook-up\", or pelau. Ackee and saltfish is another popular dish that is unique to Jamaica. Callaloo is a dish containing leafy vegetables such as spinach and sometimes okra amongst others, widely distributed in the Caribbean, with a distinctively mixed African and indigenous character.",
"title": "Caribbean dishes"
},
{
"paragraph_id": 5,
"text": "The variety of dessert dishes in the area also reflects the mixed origins of the recipes. In some areas, black cake, a derivative of English Christmas pudding, may be served, especially on special occasions.",
"title": "Caribbean dishes"
},
{
"paragraph_id": 6,
"text": "Over time, food from the Caribbean has evolved into a narrative technique through which their culture has been accentuated and promoted. However, by studying Caribbean culture through a literary lens there then runs the risk of generalizing exoticist ideas about food practices from the tropics. Some food theorists argue that this depiction of Caribbean food in various forms of media contributes to the inaccurate conceptions revolving around their culinary practices, which are much more grounded in unpleasant historical events. Therefore, it can be argued that the connection between the idea of the Caribbean being the ultimate paradise and Caribbean food being exotic is based on inaccurate information.",
"title": "Caribbean dishes"
}
] | Caribbean cuisine is a fusion of West African, Creole, Amerindian, European, Latin American, Indian/South Asian, Middle Eastern, and Chinese cuisines. These traditions were brought to the Caribbean by people who moved there from many countries. In addition, the population has created styles that are unique to the region. | 2001-10-02T19:30:11Z | 2023-12-10T21:37:25Z | [
"Template:Webarchive",
"Template:Short description",
"Template:Redirect",
"Template:Further",
"Template:Div col end",
"Template:Cookbook",
"Template:Portal",
"Template:Reflist",
"Template:Div col",
"Template:Cite web",
"Template:Cuisine",
"Template:Caribbean topics",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Caribbean_cuisine |
6,673 | Central Powers | The Central Powers, also known as the Central Empires, were one of the two main coalitions that fought in World War I (1914–1918). It consisted of Germany, Austria-Hungary, the Ottoman Empire, and Bulgaria; this was also known as the Quadruple Alliance.
The Central Powers' origin was the alliance of Germany and Austria-Hungary in 1879. Despite having nominally joined the Triple Alliance before, Italy did not take part in World War I on the side of the Central Powers. The Ottoman Empire and Bulgaria did not join until after World War I had begun. The Central Powers faced, and were defeated by, the Allied Powers, which themselves had formed around the Triple Entente.
At the start of the war, the Central Powers consisted of the German Empire and the Austro-Hungarian Empire. The Ottoman Empire joined later in 1914, followed by the Kingdom of Bulgaria in 1915. The name "Central Powers" is derived from the location of these countries; all four were located between the Russian Empire in the east and France and the United Kingdom in the west.
In early July 1914, in the aftermath of the assassination of Austro-Hungarian Archduke Franz Ferdinand and faced with the prospect of war between Austria-Hungary and Serbia, Kaiser Wilhelm II and the German government informed the Austro-Hungarian government that Germany would uphold its alliance with Austria-Hungary and defend it from possible Russian intervention if a war between Austria-Hungary and Serbia took place. When Russia enacted a general mobilization, Germany viewed the act as provocative. The Russian government promised Germany that its general mobilization did not mean preparation for war with Germany but was a reaction to the tensions between Austria-Hungary and Serbia. The German government regarded the Russian promise of no war with Germany to be nonsense in light of its general mobilization, and Germany, in turn, mobilized for war. On 1 August, Germany sent an ultimatum to Russia stating that since both Germany and Russia were in a state of military mobilization, an effective state of war existed between the two countries. Later that day, France, an ally of Russia, declared a state of general mobilization.
In August 1914, Germany attacked Russia, citing Russian aggression as demonstrated by the mobilization of the Russian army, which had resulted in Germany mobilizing in response.
After Germany declared war on Russia, France, with its alliance with Russia, prepared a general mobilization in expectation of war. On 3 August 1914, Germany responded to this action by declaring war on France. Germany, facing a two-front war, enacted what was known as the Schlieffen Plan, which involved German armed forces moving through Belgium and swinging south into France and towards the French capital of Paris. It was hoped that this plan would quickly achieve victory against the French and allow German forces to concentrate on the Eastern Front. Belgium was a neutral country and would not accept German forces crossing its territory. Germany disregarded Belgian neutrality and invaded the country to launch an offensive towards Paris. This caused Great Britain to declare war against the German Empire, as the action violated the Treaty of London that both nations signed in 1839 guaranteeing Belgian neutrality.
Subsequently, several states declared war on Germany in late August 1914, with Italy declaring war on Germany in August 1916, the United States in April 1917, and Greece in July 1917.
After defeating France in the Franco-Prussian War, the German Empire incorporated the province of Alsace-Lorraine upon its founding in 1871. However, the province was still claimed by French revanchists, leading to its cession back to France in the Treaty of Versailles.
The German Empire was late to colonization, only beginning overseas expansion in the 1870s and 1880s. Support for colonization was opposed by much of the government, including chancellor Otto von Bismarck, but Germany became a colonial power after participating in the Berlin Conference. Private companies were then founded and began settling parts of Africa, the Pacific, and China. The territories these companies settled later became German protectorates and colonies.
Cameroon was a German colony existing from 1884 until its complete occupation in 1915. It was ceded to France as a League of Nations Mandate at the war's end.
German East Africa was founded in 1885 and expanded to include modern-day Tanzania (except Zanzibar), Rwanda, Burundi, and parts of Mozambique. It was the only German colony to not be fully conquered during the war, with resistance by commander Paul von Lettow-Vorbeck lasting until November 1918. Later it was surrendered to the Allies in 1919 and split between the Belgian Congo, Portuguese Mozambique, and the newly founded colony of Tanganyika.
South West Africa, modern-day Namibia, came under German rule in 1885 and was absorbed into South Africa following its invasion in 1915. Members of the local German army and police fought alongside South Africans seeking independence in the Maritz Rebellion.
Togoland, now split between Ghana and Togo, was made a German protectorate in 1884. However, after a swift campaign, it was occupied by the Allies in 1914 and divided between French Togoland and British Togoland.
The Jiaozhou Bay Leased Territory was a German dependency in East Asia leased from China in 1898. Japanese forces occupied it following the Siege of Tsingtao.
German New Guinea was a German protectorate in the Pacific. It was occupied by Australian forces in 1914.
German Samoa was a German protectorate following the Tripartite Convention. It was occupied by the New Zealand Expeditionary Force in 1914.
Austria-Hungary regarded the assassination of Archduke Franz Ferdinand as having been orchestrated with the assistance of Serbia. The country viewed the assassination as setting a dangerous precedent of encouraging the country's South Slav population to rebel and threaten to tear apart the multinational country. Austria-Hungary sent a formal ultimatum to Serbia demanding a full-scale investigation of Serbian government complicity in the assassination and complete compliance by Serbia in agreeing to the terms demanded by Austria-Hungary. Serbia agreed to accept most of the demands. However, Austria-Hungary viewed this as insufficient and used this lack of full compliance to justify military intervention. These demands have been viewed as a diplomatic cover for an inevitable Austro-Hungarian declaration of war on Serbia.
Russia had warned Austria-Hungary that the Russian government would not tolerate Austria-Hungary invading Serbia. However, with Germany supporting Austria-Hungary's actions, the Austro-Hungarian government hoped that Russia would not intervene and that the conflict with Serbia would remain a regional conflict.
Austria-Hungary's invasion of Serbia resulted in Russia declaring war on the country, and Germany, in turn, declared war on Russia, setting off the beginning of the clash of alliances that resulted in the World War.
Austria-Hungary was internally divided into two states with their own governments, joined through the Habsburg throne. Austria, also known as Cisleithania, contained various duchies and principalities but also the Kingdom of Bohemia, the Kingdom of Dalmatia, and the Kingdom of Galicia and Lodomeria. Hungary (Transleithania) comprised the Kingdom of Hungary and the Kingdom of Croatia-Slavonia. In Bosnia and Herzegovina, sovereign authority was shared by both Austria and Hungary.
The Ottoman Empire joined the war on the side of the Central Powers in November 1914. The Ottoman Empire had gained strong economic connections with Germany through the Berlin-to-Baghdad railway project that was still incomplete at the time. The Ottoman Empire made a formal alliance with Germany signed on 2 August 1914. The alliance treaty expected that the Ottoman Empire would become involved in the conflict in a short amount of time. However, for the first several months of the war, the Ottoman Empire maintained neutrality though it allowed a German naval squadron to enter and stay near the strait of Bosphorus. Ottoman officials informed the German government that the country needed time to prepare for conflict. Germany provided financial aid and weapons shipments to the Ottoman Empire.
After pressure escalated from the German government demanding that the Ottoman Empire fulfill its treaty obligations, or else Germany would expel the country from the alliance and terminate economic and military assistance, the Ottoman government entered the war with the recently acquired cruisers from Germany, the Yavuz Sultan Selim (formerly SMS Goeben) and the Midilli (formerly SMS Breslau) launching a naval raid on the Russian port of Odesa, thus engaging in military action in accordance with its alliance obligations with Germany. Russia and the Triple Entente declared war on the Ottoman Empire.
Bulgaria was still resentful after its defeat in July 1913 at the hands of Serbia, Greece and Romania. It signed a treaty of defensive alliance with the Ottoman Empire on 19 August 1914. Bulgaria was the last country to join the Central Powers, which it did in October 1915 by declaring war on Serbia. It invaded Serbia in conjunction with German and Austro-Hungarian forces. Bulgaria held claims on the region of Vardar Macedonia then held by Serbia following the Balkan Wars of 1912–1913 and the Treaty of Bucharest (1913). As a condition of entering the war on the side of the Central Powers, Bulgaria was granted the right to reclaim that territory.
In opposition to offensive operations by the Union of South Africa, which had joined the war, Boer army officers in what is now known as the Maritz Rebellion "refounded" the South African Republic in September 1914. Germany assisted the rebels, some of whom operated in and out of the German colony of German South-West Africa. The rebels were all defeated or captured by South African government forces by 4 February 1915.
The Senussi Order was a Muslim political-religious tariqa (Sufi order) and clan in Libya, previously under Ottoman control, which had been lost to Italy in 1912. In 1915, they were courted by the Ottoman Empire and Germany, and Grand Senussi Ahmed Sharif as-Senussi declared jihad and attacked the Italians in Libya and British-controlled Egypt in the Senussi Campaign.
In 1915 the Sultanate of Darfur renounced allegiance to the Sudan government and aligned with the Ottomans. The Anglo-Egyptian Darfur Expedition preemptively acted in March 1916 to prevent an attack on Sudan and took control of the Sultanate by November 1916.
The Zaian Confederation began to fight against France in the Zaian War to prevent French expansion into Morocco. The fighting began in 1914 and continued until 1921, after the First World War had ended. The Central Powers (mainly the Germans) attempted to incite unrest in the hope of diverting French resources from Europe.
The Dervish State fought against the British Empire, Ethiopian Empire, Italian Empire, and the French Empire between 1896 and 1925. During World War I, the Dervish State received many supplies from the German Empire and the Ottoman Empire to carry on fighting the Allies. However, looting from other Somali tribes in the Korahe raid eventually led to its collapse in 1925.
With the Bolshevik attack of late 1917, the General Secretariat of Ukraine sought military protection first from the Central Powers and later from the armed forces of the Entente.
The Ottoman Empire also had its own allies in Azerbaijan and the Northern Caucasus. The three nations fought alongside each other under the Army of Islam in the Battle of Baku.
The Kingdom of Poland was a client state of Germany proclaimed in 1916 and established on 14 January 1917. This government was recognized by the emperors of Germany and Austria-Hungary in November 1916, and it adopted a constitution in 1917. The decision to create a Polish State was taken by Germany in order to attempt to legitimize its military occupation amongst the Polish inhabitants, following German propaganda sent to Polish inhabitants in 1915 claiming that German soldiers were arriving as liberators to free Poland from subjugation by Russia. The German government utilized the state, alongside punitive threats, to induce Polish landowners living in the German-occupied Baltic territories to sell their Baltic property to Germans and move to Poland. Efforts were made to induce similar emigration of Poles from Prussia to the state.
The Kingdom of Lithuania was a client state of Germany created on 16 February 1918.
The Belarusian Democratic Republic was a client state of Germany created on 9 March 1918.
The Ukrainian State was a client state of Germany led by Hetman Pavlo Skoropadskyi from 29 April 1918, after the government of the Ukrainian People's Republic was overthrown.
The Crimean Regional Government was a client state of Germany created on 25 June 1918. It was officially part of the Ukrainian State but acted separately from the central government. The Kuban People's Republic eventually voted to join the Ukrainian State.
The Duchy of Courland and Semigallia was a client state of Germany created on 8 March 1918.
The Baltic State, also known as the "United Baltic Duchy", was proclaimed on 22 September 1918 by the Baltic German ruling class. It was to encompass the former Estonian governorates and incorporate the recently established Courland and Semigallia into a unified state. An armed force in the form of the Baltische Landeswehr was created in November 1918, just before the surrender of Germany; it would participate in the Russian Civil War in the Baltics.
Finland had been an autonomous Grand Duchy within the Russian Empire since 1809, and the collapse of the Russian Empire in 1917 gave it its independence. Following the end of the Finnish Civil War in May 1918, in which Germany supported the "Whites" against the Soviet-backed labour movement, there were moves to create a Kingdom of Finland. A German prince was elected king; however, the Armistice intervened.
The Democratic Republic of Georgia declared independence in 1918, which then led to border conflicts between the newly formed republic and the Ottoman Empire. Soon after, the Ottoman Empire invaded the republic and quickly reached Borjomi. This forced Georgia to ask for help from Germany, which was granted. Germany forced the Ottomans to withdraw from Georgian territories and recognize Georgian sovereignty. Germany, Georgia and the Ottomans signed a peace treaty, the Treaty of Batum, which ended the conflict between the latter two. In return, Georgia became a German "ally". This period of Georgian–German friendship was known as the German Caucasus expedition.
The Don Republic was founded on 18 May 1918. Their ataman Pyotr Krasnov portrayed himself as willing to serve as a pro-German warlord.
Jabal Shammar was an Arab state in the Middle East that was closely associated with the Ottoman Empire.
In 1918, the Azerbaijan Democratic Republic, facing Bolshevik revolution and opposition from the Muslim Musavat Party, was then occupied by the Ottoman Empire, which expelled the Bolsheviks while supporting the Musavat Party. The Ottoman Empire maintained a presence in Azerbaijan until the end of the war in November 1918.
The Mountainous Republic of the Northern Caucasus was associated with the Central Powers.
States listed in this section were not officially members of the Central Powers. Still, during the war, they cooperated with one or more Central Powers members on a level that makes their neutrality disputable.
The Ethiopian Empire was officially neutral throughout World War I but widely suspected of sympathy for the Central Powers between 1915 and 1916. At the time, Ethiopia was one of only two fully independent states in Africa (the other being Liberia) and a major power in the Horn of Africa. Its ruler, Lij Iyasu, was widely suspected of harbouring pro-Islamic sentiments and being sympathetic to the Ottoman Empire. The German Empire also attempted to reach out to Iyasu, dispatching several unsuccessful expeditions to the region to attempt to encourage it to collaborate in an Arab Revolt-style uprising in East Africa. One of the unsuccessful expeditions was led by Leo Frobenius, a celebrated ethnographer and personal friend of Kaiser Wilhelm II. Under Iyasu's directions, Ethiopia probably supplied weapons to the Muslim Dervish rebels during the Somaliland Campaign of 1915 to 1916, indirectly helping the Central Powers' cause.
Fearing the rising influence of Iyasu and the Ottoman Empire, the Christian nobles of Ethiopia conspired against Iyasu over the course of 1915. Iyasu was first excommunicated by the Ethiopian Orthodox Patriarch and eventually deposed in a coup d'état on 27 September 1916. A less pro-Ottoman regent, Ras Tafari Makonnen, was installed on the throne.
Other movements supported the efforts of the Central Powers for their own reasons, such as the radical Irish Nationalists who launched the Easter Rising in Dublin in April 1916; they referred to their "gallant allies in Europe". However, most Irish Nationalists supported the British and allied war effort up until 1916, when the Irish political landscape was changing. In 1914, Józef Piłsudski was permitted by Germany and Austria-Hungary to form independent Polish legions. Piłsudski wanted his legions to help the Central Powers defeat Russia and then side with France and the UK and win the war with them.
Bulgaria signed an armistice with the Allies on 29 September 1918, following a successful Allied advance in Macedonia. The Ottoman Empire followed suit on 30 October 1918 in the face of British and Arab gains in Palestine and Syria. Austria and Hungary concluded ceasefires separately during the first week of November following the disintegration of the Habsburg Empire and the Italian offensive at Vittorio Veneto; Germany signed the armistice ending the war on the morning of 11 November 1918 after the Hundred Days Offensive, and a succession of advances by New Zealand, Australian, Canadian, Belgian, British, French and US forces in north-eastern France and Belgium. There was no unified treaty ending the war; the Central Powers were dealt with in separate treaties. | [
{
"paragraph_id": 0,
"text": "The Central Powers, also known as the Central Empires, were one of the two main coalitions that fought in World War I (1914–1918). It consisted of Germany, Austria-Hungary, the Ottoman Empire, and Bulgaria; this was also known as the Quadruple Alliance.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Central Powers' origin was the alliance of Germany and Austria-Hungary in 1879. Despite having nominally joined the Triple Alliance before, Italy did not take part in World War I on the side of the Central Powers. The Ottoman Empire and Bulgaria did not join until after World War I had begun. The Central Powers faced, and were defeated by, the Allied Powers, which themselves had formed around the Triple Entente.",
"title": ""
},
{
"paragraph_id": 2,
"text": "At the start of the war, the Central Powers consisted of the German Empire and the Austro-Hungarian Empire. The Ottoman Empire joined later in 1914, followed by the Kingdom of Bulgaria in 1915. The name \"Central Powers\" is derived from the location of these countries; all four were located between the Russian Empire in the east and France and the United Kingdom in the west.",
"title": "Member states"
},
{
"paragraph_id": 3,
"text": "In early July 1914, in the aftermath of the assassination of Austro-Hungarian Archduke Franz Ferdinand and faced with the prospect of war between Austria-Hungary and Serbia, Kaiser Wilhelm II and the German government informed the Austro-Hungarian government that Germany would uphold its alliance with Austria-Hungary and defend it from possible Russian intervention if a war between Austria-Hungary and Serbia took place. When Russia enacted a general mobilization, Germany viewed the act as provocative. The Russian government promised Germany that its general mobilization did not mean preparation for war with Germany but was a reaction to the tensions between Austria-Hungary and Serbia. The German government regarded the Russian promise of no war with Germany to be nonsense in light of its general mobilization, and Germany, in turn, mobilized for war. On 1 August, Germany sent an ultimatum to Russia stating that since both Germany and Russia were in a state of military mobilization, an effective state of war existed between the two countries. Later that day, France, an ally of Russia, declared a state of general mobilization.",
"title": "Combatants"
},
{
"paragraph_id": 4,
"text": "In August 1914, Germany attacked Russia, citing Russian aggression as demonstrated by the mobilization of the Russian army, which had resulted in Germany mobilizing in response.",
"title": "Combatants"
},
{
"paragraph_id": 5,
"text": "After Germany declared war on Russia, France, with its alliance with Russia, prepared a general mobilization in expectation of war. On 3 August 1914, Germany responded to this action by declaring war on France. Germany, facing a two-front war, enacted what was known as the Schlieffen Plan, which involved German armed forces moving through Belgium and swinging south into France and towards the French capital of Paris. This plan was hoped to quickly gain victory against the French and allow German forces to concentrate on the Eastern Front. Belgium was a neutral country and would not accept German forces crossing its territory. Germany disregarded Belgian neutrality and invaded the country to launch an offensive towards Paris. This caused Great Britain to declare war against the German Empire, as the action violated the Treaty of London that both nations signed in 1839 guaranteeing Belgian neutrality.",
"title": "Combatants"
},
{
"paragraph_id": 6,
"text": "Subsequently, several states declared war on Germany in late August 1914, with Italy declaring war on Germany in August 1916, the United States in April 1917, and Greece in July 1917.",
"title": "Combatants"
},
{
"paragraph_id": 7,
"text": "After successfully beating France in the Franco-Prussian War, the German Empire incorporated the province of Alsace-Lorraine upon its founding in 1871. However, the province was still claimed by French revanchists, leading to its recession to France at the Treaty of Versailles.",
"title": "Combatants"
},
{
"paragraph_id": 8,
"text": "The German Empire was late to colonization, only beginning overseas expansion in the 1870s and 1880s. Support for colonization was opposed by much of the government, including chancellor Otto von Bismarck, but it became a colonial power after participating in the Berlin Conference. Then, private companies were founded and began settling parts of Africa, the Pacific, and China. Later these groups became German protectorates and colonies.",
"title": "Combatants"
},
{
"paragraph_id": 9,
"text": "Cameroon was a German colony existing from 1884 until its complete occupation in 1915. It was ceded to France as a League of Nations Mandate at the war's end.",
"title": "Combatants"
},
{
"paragraph_id": 10,
"text": "German East Africa was founded in 1885 and expanded to include modern-day Tanzania (except Zanzibar), Rwanda, Burundi, and parts of Mozambique. It was the only German colony to not be fully conquered during the war, with resistance by commander Paul von Lettow-Vorbeck lasting until November 1918. Later it was surrendered to the Allies in 1919 and split between the Belgian Congo, Portuguese Mozambique, and the newly founded colony of Tanganyika.",
"title": "Combatants"
},
{
"paragraph_id": 11,
"text": "South West Africa, modern-day Namibia, became under German rule in 1885 and was absorbed into South Africa following its invasion in 1915. Members of the local German army and police fought with South Africans seeking independence in the Maritz Rebellion.",
"title": "Combatants"
},
{
"paragraph_id": 12,
"text": "Togoland, now part of Ghana, was made a German protectorate in 1884. However, after a swift campaign, it was occupied by the Allies in 1915 and divided between French Togoland and British Togoland.",
"title": "Combatants"
},
{
"paragraph_id": 13,
"text": "The Jiaozhou Bay Leased Territory was a German dependency in East Asia leased from China in 1898. Japanese forces occupied it following the Siege of Tsingtao.",
"title": "Combatants"
},
{
"paragraph_id": 14,
"text": "German New Guinea was a German protectorate in the Pacific. It was occupied by Australian forces in 1914.",
"title": "Combatants"
},
{
"paragraph_id": 15,
"text": "German Samoa was a German protectorate following the Tripartite Convention. It was occupied by the New Zealand Expeditionary Force in 1914.",
"title": "Combatants"
},
{
"paragraph_id": 16,
"text": "Austria-Hungary regarded the assassination of Archduke Franz Ferdinand as having been orchestrated with the assistance of Serbia. The country viewed the assassination as setting a dangerous precedent of encouraging the country's South Slav population to rebel and threaten to tear apart the multinational country. Austria-Hungary sent a formal ultimatum to Serbia demanding a full-scale investigation of Serbian government complicity in the assassination and complete compliance by Serbia in agreeing to the terms demanded by Austria-Hungary. Serbia submitted to accept most of the demands. However, Austria-Hungary viewed this as insufficient and used this lack of full compliance to justify military intervention. These demands have been viewed as a diplomatic cover for an inevitable Austro-Hungarian declaration of war on Serbia.",
"title": "Combatants"
},
{
"paragraph_id": 17,
"text": "Russia had warned Austria-Hungary that the Russian government would not tolerate Austria-Hungary invading Serbia. However, with Germany supporting Austria-Hungary's actions, the Austro-Hungarian government hoped that Russia would not intervene and that the conflict with Serbia would remain a regional conflict.",
"title": "Combatants"
},
{
"paragraph_id": 18,
"text": "Austria-Hungary's invasion of Serbia resulted in Russia declaring war on the country, and Germany, in turn, declared war on Russia, setting off the beginning of the clash of alliances that resulted in the World War.",
"title": "Combatants"
},
{
"paragraph_id": 19,
"text": "Austria-Hungary was internally divided into two states with their own governments, joined through the Habsburg throne. Austria, also known as Cisleithania, contained various duchies and principalities but also the Kingdom of Bohemia, the Kingdom of Dalmatia, and the Kingdom of Galicia and Lodomeria. Hungary (Transleithania) comprised the Kingdom of Hungary and the Kingdom of Croatia-Slavonia. In Bosnia and Herzegovina, sovereign authority was shared by both Austria and Hungary.",
"title": "Combatants"
},
{
"paragraph_id": 20,
"text": "The Ottoman Empire joined the war on the side of the Central Powers in November 1914. The Ottoman Empire had gained strong economic connections with Germany through the Berlin-to-Baghdad railway project that was still incomplete at the time. The Ottoman Empire made a formal alliance with Germany signed on 2 August 1914. The alliance treaty expected that the Ottoman Empire would become involved in the conflict in a short amount of time. However, for the first several months of the war, the Ottoman Empire maintained neutrality though it allowed a German naval squadron to enter and stay near the strait of Bosphorus. Ottoman officials informed the German government that the country needed time to prepare for conflict. Germany provided financial aid and weapons shipments to the Ottoman Empire.",
"title": "Combatants"
},
{
"paragraph_id": 21,
"text": "After pressure escalated from the German government demanding that the Ottoman Empire fulfill its treaty obligations, or else Germany would expel the country from the alliance and terminate economic and military assistance, the Ottoman government entered the war with the recently acquired cruisers from Germany, the Yavuz Sultan Selim (formerly SMS Goeben) and the Midilli (formerly SMS Breslau) launching a naval raid on the Russian port of Odesa, thus engaging in military action in accordance with its alliance obligations with Germany. Russia and the Triple Entente declared war on the Ottoman Empire.",
"title": "Combatants"
},
{
"paragraph_id": 22,
"text": "Bulgaria was still resentful after its defeat in July 1913 at the hands of Serbia, Greece and Romania. It signed a treaty of defensive alliance with the Ottoman Empire on 19 August 1914. Bulgaria was the last country to join the Central Powers, which it did in October 1915 by declaring war on Serbia. It invaded Serbia in conjunction with German and Austro-Hungarian forces. Bulgaria held claims on the region of Vardar Macedonia then held by Serbia following the Balkan Wars of 1912–1913 and the Treaty of Bucharest (1913). As a condition of entering the war on the side of the Central Powers, Bulgaria was granted the right to reclaim that territory.",
"title": "Combatants"
},
{
"paragraph_id": 23,
"text": "In opposition to offensive operations by Union of South Africa, which had joined the war, Boer army officers of what is now known as the Maritz Rebellion \"refounded\" the South African Republic in September 1914. Germany assisted the rebels, some rebels operating in and out of the German colony of German South-West Africa. The rebels were all defeated or captured by South African government forces by 4 February 1915.",
"title": "Co-belligerents"
},
{
"paragraph_id": 24,
"text": "The Senussi Order was a Muslim political-religious tariqa (Sufi order) and clan in Libya, previously under Ottoman control, which had been lost to Italy in 1912. In 1915, they were courted by the Ottoman Empire and Germany, and Grand Senussi Ahmed Sharif as-Senussi declared jihad and attacked the Italians in Libya and British controlled Egypt in the Senussi Campaign.",
"title": "Co-belligerents"
},
{
"paragraph_id": 25,
"text": "In 1915 the Sultanate of Darfur renounced allegiance to the Sudan government and aligned with the Ottomans. The Anglo-Egyptian Darfur Expedition preemptively acted in March 1916 to prevent an attack on Sudan and took control of the Sultanate by November 1916.",
"title": "Co-belligerents"
},
{
"paragraph_id": 26,
"text": "The Zaian Confederation began to fight against France in the Zaian War to prevent French expansion into Morocco. The fighting lasted from 1914 and continued after the First World War ended, to 1921. The Central Powers (mainly the Germans) began to attempt to incite unrest to hopefully divert French resources from Europe.",
"title": "Co-belligerents"
},
{
"paragraph_id": 27,
"text": "The Dervish State fought against the British Empire, Ethiopian Empire, Italian Empire, and the French Empire between 1896 and 1925. During World War I, the Dervish State received many supplies from the German Empire and the Ottoman Empire to carry on fighting the Allies. However, looting from other Somali tribes in the Korahe raid eventually led to its collapse in 1925.",
"title": "Co-belligerents"
},
{
"paragraph_id": 28,
"text": "With the Bolshevik attack of late 1917, the General Secretariat of Ukraine sought military protection first from the Central Powers and later from the armed forces of the Entente.",
"title": "Client states"
},
{
"paragraph_id": 29,
"text": "The Ottoman Empire also had its own allies in Azerbaijan and the Northern Caucasus. The three nations fought alongside each other under the Army of Islam in the Battle of Baku.",
"title": "Client states"
},
{
"paragraph_id": 30,
"text": "The Kingdom of Poland was a client state of Germany proclaimed in 1916 and established on 14 January 1917. This government was recognized by the emperors of Germany and Austria-Hungary in November 1916, and it adopted a constitution in 1917. The decision to create a Polish State was taken by Germany in order to attempt to legitimize its military occupation amongst the Polish inhabitants, following upon German propaganda sent to Polish inhabitants in 1915 that German soldiers were arriving as liberators to free Poland from subjugation by Russia. The German government utilized the state alongside punitive threats to induce Polish landowners living in the German-occupied Baltic territories to move to the state and sell their Baltic property to Germans in exchange for moving to Poland. Efforts were made to induce similar emigration of Poles from Prussia to the state.",
"title": "Client states"
},
{
"paragraph_id": 31,
"text": "The Kingdom of Lithuania was a client state of Germany created on 16 February 1918.",
"title": "Client states"
},
{
"paragraph_id": 32,
"text": "The Belarusian Democratic Republic was a client state of Germany created on 9 March 1918.",
"title": "Client states"
},
{
"paragraph_id": 33,
"text": "The Ukrainian State was a client state of Germany led by Hetman Pavlo Skoropadskyi from 29 April 1918, after the government of the Ukrainian People's Republic was overthrown.",
"title": "Client states"
},
{
"paragraph_id": 34,
"text": "The Crimean Regional Government was a client state of Germany created on 25 June 1918. It was officially part of the Ukrainian State but acted separate from the central government. The Kuban People's Republic eventually voted to join the Ukrainian State.",
"title": "Client states"
},
{
"paragraph_id": 35,
"text": "The Duchy of Courland and Semigallia was a client state of Germany created on 8 March 1918.",
"title": "Client states"
},
{
"paragraph_id": 36,
"text": "The Baltic State also known as the \"United Baltic Duchy\", was proclaimed on 22 September 1918 by the Baltic German ruling class. It was to encompass the former Estonian governorates and incorporate the recently established Courland and Semigallia into a unified state. An armed force in the form of the Baltische Landeswehr was created in November 1918, just before the surrender of Germany, which would participate in the Russian Civil War in the Baltics.",
"title": "Client states"
},
{
"paragraph_id": 37,
"text": "Finland had been an autonomous Grand Duchy within the Russian Empire since 1809, and the collapse of the Russian Empire in 1917 gave it its independence. Following the end of the Finnish Civil War, in which Germany supported the \"Whites\" against the Soviet-backed labour movement, in May 1918, there were moves to create a Kingdom of Finland. A German prince was elected. However, the Armistice intervened.",
"title": "Client states"
},
{
"paragraph_id": 38,
"text": "The Democratic Republic of Georgia declared independence in 1918 which then led to border conflicts between the newly formed republic and the Ottoman Empire. Soon after, the Ottoman Empire invaded the republic and quickly reached Borjomi. This forced Georgia to ask for help from Germany, which they were granted. Germany forced the Ottomans to withdraw from Georgian territories and recognize Georgian sovereignty. Germany, Georgia and the Ottomans signed a peace treaty, the Treaty of Batum which ended the conflict with the last two. In return, Georgia became a German \"ally\". This time period of Georgian-German friendship was known as German Caucasus expedition.",
"title": "Client states"
},
{
"paragraph_id": 39,
"text": "The Don Republic was founded on 18 May 1918. Their ataman Pyotr Krasnov portrayed himself as willing to serve as a pro-German warlord.",
"title": "Client states"
},
{
"paragraph_id": 40,
"text": "Jabal Shammar was an Arab state in the Middle East that was closely associated with the Ottoman Empire.",
"title": "Client states"
},
{
"paragraph_id": 41,
"text": "In 1918, the Azerbaijan Democratic Republic, facing Bolshevik revolution and opposition from the Muslim Musavat Party, was then occupied by the Ottoman Empire, which expelled the Bolsheviks while supporting the Musavat Party. The Ottoman Empire maintained a presence in Azerbaijan until the end of the war in November 1918.",
"title": "Client states"
},
{
"paragraph_id": 42,
"text": "The Mountainous Republic of the Northern Caucasus was associated with the Central Powers.",
"title": "Client states"
},
{
"paragraph_id": 43,
"text": "States listed in this section were not officially members of the Central Powers. Still, during the war, they cooperated with one or more Central Powers members on a level that makes their neutrality disputable.",
"title": "Controversial cases"
},
{
"paragraph_id": 44,
"text": "The Ethiopian Empire was officially neutral throughout World War I but widely suspected of sympathy for the Central Powers between 1915 and 1916. At the time, Ethiopia was one of only two fully independent states in Africa (the other being Liberia) and a major power in the Horn of Africa. Its ruler, Lij Iyasu, was widely suspected of harbouring pro-Islamic sentiments and being sympathetic to the Ottoman Empire. The German Empire also attempted to reach out to Iyasu, dispatching several unsuccessful expeditions to the region to attempt to encourage it to collaborate in an Arab Revolt-style uprising in East Africa. One of the unsuccessful expeditions was led by Leo Frobenius, a celebrated ethnographer and personal friend of Kaiser Wilhelm II. Under Iyasu's directions, Ethiopia probably supplied weapons to the Muslim Dervish rebels during the Somaliland Campaign of 1915 to 1916, indirectly helping the Central Powers' cause.",
"title": "Controversial cases"
},
{
"paragraph_id": 45,
"text": "Fearing the rising influence of Iyasu and the Ottoman Empire, the Christian nobles of Ethiopia conspired against Iyasu over 1915. Iyasu was first excommunicated by the Ethiopian Orthodox Patriarch and eventually deposed in a coup d'état on 27 September 1916. A less pro-Ottoman regent, Ras Tafari Makonnen, was installed on the throne.",
"title": "Controversial cases"
},
{
"paragraph_id": 46,
"text": "Other movements supported the efforts of the Central Powers for their own reasons, such as the radical Irish Nationalists who launched the Easter Rising in Dublin in April 1916; they referred to their \"gallant allies in Europe\". However, most Irish Nationalists supported the British and allied war effort up until 1916, when the Irish political landscape was changing. In 1914, Józef Piłsudski was permitted by Germany and Austria-Hungary to form independent Polish legions. Piłsudski wanted his legions to help the Central Powers defeat Russia and then side with France and the UK and win the war with them.",
"title": "Non-state combatants"
},
{
"paragraph_id": 47,
"text": "Bulgaria signed an armistice with the Allies on 29 September 1918, following a successful Allied advance in Macedonia. The Ottoman Empire followed suit on 30 October 1918 in the face of British and Arab gains in Palestine and Syria. Austria and Hungary concluded ceasefires separately during the first week of November following the disintegration of the Habsburg Empire and the Italian offensive at Vittorio Veneto; Germany signed the armistice ending the war on the morning of 11 November 1918 after the Hundred Days Offensive, and a succession of advances by New Zealand, Australian, Canadian, Belgian, British, French and US forces in north-eastern France and Belgium. There was no unified treaty ending the war; the Central Powers were dealt with in separate treaties.",
"title": "Armistice and treaties"
}
] | The Central Powers, also known as the Central Empires, were one of the two main coalitions that fought in World War I (1914–1918). It consisted of Germany, Austria-Hungary, the Ottoman Empire, and Bulgaria; this was also known as the Quadruple Alliance. The Central Powers' origin was the alliance of Germany and Austria-Hungary in 1879. Despite having nominally joined the Triple Alliance before, Italy did not take part in World War I on the side of the Central Powers. The Ottoman Empire and Bulgaria did not join until after World War I had begun. The Central Powers faced, and were defeated by, the Allied Powers, which themselves had formed around the Triple Entente. | 2001-10-03T02:50:01Z | 2023-12-29T18:26:23Z | [
"Template:Use dmy dates",
"Template:SMS",
"Template:Flagu",
"Template:Portal",
"Template:Reflist",
"Template:Infobox geopolitical organization",
"Template:Flagicon",
"Template:Dts",
"Template:Cite book",
"Template:Webarchive",
"Template:See also",
"Template:World War I",
"Template:More citations needed",
"Template:Div col",
"Template:Lang-de",
"Template:Lang-hu",
"Template:Lang-ota",
"Template:Smaller",
"Template:Short description",
"Template:For",
"Template:Ublist",
"Template:Flag",
"Template:Main",
"Template:Lang-bg",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite news",
"Template:WWI history by nation",
"Template:Authority control",
"Template:Lang"
] | https://en.wikipedia.org/wiki/Central_Powers |
6,675 | Conservatism | Conservatism is a cultural, social, and political philosophy that seeks to promote and to preserve traditional institutions, customs, and values. The central tenets of conservatism may vary in relation to the culture and civilization in which it appears. In Western culture, depending on the particular nation, conservatives seek to promote a range of social institutions, such as the nuclear family, organized religion, the military, the nation-state, property rights, rule of law, aristocracy, and monarchy. Conservatives tend to favour institutions and practices that guarantee social order and historical continuity.
Edmund Burke, an 18th-century Anglo-Irish statesman who opposed the French Revolution but supported the American Revolution, is credited as one of the main philosophers of conservatism in the 1790s. The first established use of the term in a political context originated in 1818 with François-René de Chateaubriand during the period of Bourbon Restoration that sought to roll back the policies of the French Revolution and establish social order.
Conservative thought has varied considerably as it has adapted itself to existing traditions and national cultures. Thus, conservatives from different parts of the world—each upholding their respective traditions—may disagree on a wide range of issues. Historically associated with right-wing politics, the term has been used to describe a wide range of views. Conservatism may be either more libertarian or more authoritarian; more populist or more elitist; more progressive or more reactionary; more moderate or more extreme.
Some political scientists, such as Samuel P. Huntington, have seen conservatism as situational. Under this definition, conservatives are seen as defending the established institutions of their time. According to Quintin Hogg, the chairman of the British Conservative Party in 1959: "Conservatism is not so much a philosophy as an attitude, a constant force, performing a timeless function in the development of a free society, and corresponding to a deep and permanent requirement of human nature itself." Conservatism is often used as a generic term to describe a "right-wing viewpoint occupying the political spectrum between [classical] liberalism and fascism".
Despite the lack of a universal definition, certain themes can be recognised as common across conservative thought. According to Michael Oakeshott:
To be conservative […] is to prefer the familiar to the unknown, to prefer the tried to the untried, fact to mystery, the actual to the possible, the limited to the unbounded, the near to the distant, the sufficient to the superabundant, the convenient to the perfect, present laughter to utopian bliss.
Such traditionalism may be a reflection of trust in time-tested methods of social organisation, giving 'votes to the dead'. Traditions may also be steeped in a sense of identity.
In contrast to the tradition-based definition of conservatism, some left-wing political theorists like Corey Robin define conservatism primarily in terms of a general defense of social and economic inequality. From this perspective, conservatism is less an attempt to uphold old institutions and more "a meditation on—and theoretical rendition of—the felt experience of having power, seeing it threatened, and trying to win it back". On another occasion, Robin argues for a more complex relation:
Conservatism is a defense of established hierarchies, but it is also fearful of those established hierarchies. It sees in their assuredness of power the source of corruption, decadence and decline. Ruling regimes require some kind of irritant, a grain of sand in the oyster, to reactivate their latent powers, to exercise their atrophied muscles, to make their pearls.
The political philosopher Yoram Hazony argues that, in a traditional conservative community, members have importance and influence to the degree they are honoured within the social hierarchy, which includes factors such as age, experience, and wisdom. The word hierarchy has religious roots and translates to 'rule of a high priest.'
Conservatism has been called a "philosophy of human imperfection" by Noël O'Sullivan, reflecting among its adherents a negative view of human nature and pessimism of the potential to improve it through 'utopian' schemes. The "intellectual godfather of the realist right", Thomas Hobbes, argued that the state of nature for humans was "poor, nasty, brutish, and short", requiring centralised authority.
Authority is a core tenet of conservatism. More specifically, conservatives tend to believe in traditional authority. This form of authority, according to Max Weber, is "resting on an established belief in the sanctity of immemorial traditions and the legitimacy of those exercising authority under them". Alexandre Kojève distinguishes between two different forms of traditional authority:
Robert Nisbet acknowledges that the decline of traditional authority in the modern world is partly linked with the retreat of old institutions such as guild, order, parish, and family—institutions that formerly acted as intermediaries between the state and the individual. Hannah Arendt claims that the modern world suffers an existential crisis with a "dramatic breakdown of all traditional authorities," which are needed for the continuity of an established civilisation.
Reactionism is a tradition in right-wing politics that opposes policies for the social transformation of society. In popular usage, reactionism refers to a staunch traditionalist conservative political perspective of a person opposed to social, political, and economic change. Adherents of conservatism often oppose certain aspects of modernity (for example mass culture and secularism) and seek a return to traditional values, though different groups of conservatives may choose different traditional values to preserve.
Some political scientists, such as Corey Robin, treat the words reactionary and conservative as synonyms. Others, such as Mark Lilla, argue that reactionism and conservatism are distinct worldviews. Francis Wilson defines conservatism as "a philosophy of social evolution, in which certain lasting values are defended within the framework of the tension of political conflict".
A reactionary is a person who holds political views that favor a return to the status quo ante, the previous political state of society, which that person believes possessed positive characteristics absent from contemporary society. An early example of a powerful reactionary movement was German Romanticism, which centered around concepts of organicism, medievalism, and traditionalism against the forces of rationalism, secularism, and individualism that were unleashed in the French Revolution.
In political discourse, being a reactionary is generally regarded as negative; Peter King observed that it is "an unsought-for label, used as a torment rather than a badge of honor". Despite this, the descriptor has been adopted by writers such as the Italian esoteric traditionalist Julius Evola, the Austrian monarchist Erik von Kuehnelt-Leddihn, the Colombian political theologian Nicolás Gómez Dávila, and the American historian John Lukacs.
In Great Britain, the Tory movement during the Restoration period (1660–1688) was a form of proto-conservatism that supported a hierarchical society with a monarch who ruled by divine right. However, Tories differ from most later, more moderate, mainstream conservatives in that they opposed the idea of popular sovereignty and rejected the authority of parliament and freedom of religion. Robert Filmer's royalist treatise Patriarcha (published in 1680 but written before the English Civil War of 1642–1651) became accepted as the statement of their doctrine.
However, the Glorious Revolution of 1688 damaged this principle by establishing a constitutional government in England, leading to the hegemony of the Tory-opposed Whig ideology. Faced with defeat, the Tories reformed their movement. They adopted more moderate conservative positions, such as holding that sovereignty was vested in the three estates of Crown, Lords, and Commons rather than solely in the Crown. Richard Hooker (1554–1600), Marquess of Halifax (1633–1695) and David Hume (1711–1776) were proto-conservatives of the period. Halifax promoted pragmatism in government whilst Hume argued against political rationalism and utopianism.
Edmund Burke (1729–1797) has been widely regarded as the philosophical founder of modern conservatism. Burke served as the private secretary to the Marquess of Rockingham and as official pamphleteer to the Rockingham branch of the Whig party. Together with the Tories, the Rockingham Whigs were the conservatives in the late 18th-century United Kingdom.
Burke's views were a mixture of conservatism and republicanism. He supported the American Revolution of 1775–1783 but abhorred the violence of the French Revolution of 1789–1799. He accepted the conservative ideals of private property and the economics of Adam Smith (1723–1790), but thought that economics should remain subordinate to the conservative social ethic, that capitalism should be subordinate to the medieval social tradition, and that the business class should be subordinate to aristocracy. He insisted on standards of honour derived from the medieval aristocratic tradition and saw the aristocracy as the nation's natural leaders. That meant limits on the powers of the Crown, since he found the institutions of Parliament to be better informed than commissions appointed by the executive. He favored an established church, but allowed for a degree of religious toleration. Burke ultimately justified the social order on the basis of tradition: tradition represented the wisdom of the species, and he valued community and social harmony over social reforms.
Another form of conservatism developed in France in parallel to conservatism in Britain. It was influenced by Counter-Enlightenment works by philosophers such as Joseph de Maistre (1753–1821) and Louis de Bonald (1754–1840). Many continental conservatives do not support separation of church and state, with most supporting state recognition of and cooperation with the Catholic Church, such as had existed in France before the Revolution. Conservatives were also early to embrace nationalism, which was previously associated with liberalism and the Revolution in France. Another early French conservative, François-René de Chateaubriand (1768–1848), espoused a romantic opposition to modernity, contrasting its emptiness with the 'full heart' of traditional faith and loyalty. Elsewhere on the continent, the German thinkers Justus Möser (1720–1794) and Friedrich von Gentz (1764–1832) criticized the Declaration of the Rights of Man and of the Citizen that emerged from the Revolution. Opposition was also expressed by German idealists such as Adam Müller (1779–1829) and Georg Wilhelm Friedrich Hegel (1770–1831), the latter inspiring both leftist and rightist followers.
Both Burke and Maistre were critical of democracy in general, though their reasons differed. Maistre was pessimistic about humans being able to follow rules, while Burke was skeptical about humans' innate ability to make rules. For Maistre, rules had a divine origin, while Burke believed they arose from custom. The lack of custom for Burke, and the lack of divine guidance for Maistre, meant that people would act in terrible ways. Both also believed that liberty of the wrong kind led to bewilderment and political breakdown. Their ideas would together flow into a stream of anti-rationalist, romantic conservatism, but would still stay separate. Whereas Burke was more open to argumentation and disagreement, Maistre wanted faith and authority, leading to a more illiberal strain of thought.
Liberal conservatism is a variant of conservatism that is strongly influenced by liberal stances. It incorporates the classical liberal view of minimal economic interventionism, meaning that individuals should be free to participate in the market and generate wealth without government interference. However, individuals cannot be thoroughly depended on to act responsibly in other spheres of life; therefore, liberal conservatives believe that a strong state is necessary to ensure law and order, and social institutions are needed to nurture a sense of duty and responsibility to the nation.
Originally opposed to capitalism and the industrial revolution, the conservative ideology in many countries adopted economic liberalism. This is also the case in countries such as the United States, where liberal economic ideas are the tradition and are thus considered conservative. A secondary meaning for the term liberal conservatism, which developed in Europe, is a combination of more modern conservative views with those of social liberalism. Often this involves stressing free-market economics and belief in individual responsibility, along with a defence of civil rights, environmentalism, and support for a limited welfare state. In continental Europe, this is sometimes also translated into English as social conservatism.
Libertarian conservatism describes certain political ideologies most prominently within the United States which combine libertarian economic issues with aspects of conservatism. Its four main branches are constitutionalism, paleolibertarianism, small government conservatism and Christian libertarianism. They generally differ from paleoconservatives, in that they favor more personal and economic freedom. The agorist philosopher Samuel Edward Konkin III labeled libertarian conservatism right-libertarianism. In the United States, fusionism is the combination of traditionalist and social conservatism with political and economic right-libertarianism.
In contrast to paleoconservatives, libertarian conservatives support strict laissez-faire policies such as free trade and opposition to business regulations and any national bank. They are often opposed to environmental regulations, corporate welfare, subsidies, and other areas of economic intervention. Many conservatives, especially in the United States, believe that the government should not play a major role in regulating business and managing the economy. They typically oppose high taxes and income redistribution, arguing that such efforts only exacerbate unemployment and poverty by lessening the ability of businesses to hire employees under higher tax burdens.
Fiscal conservatism is the economic philosophy of prudence in government spending and debt. In Reflections on the Revolution in France (1790), Edmund Burke argued that a government does not have the right to run up large debts and then throw the burden on the taxpayer:
[I]t is to the property of the citizen, and not to the demands of the creditor of the state, that the first and original faith of civil society is pledged. The claim of the citizen is prior in time, paramount in title, superior in equity. The fortunes of individuals, whether possessed by acquisition or by descent or in virtue of a participation in the goods of some community, were no part of the creditor's security, expressed or implied...[T]he public, whether represented by a monarch or by a senate, can pledge nothing but the public estate; and it can have no public estate except in what it derives from a just and proportioned imposition upon the citizens at large.
National conservatism is a political term used primarily in Europe to describe a variant of conservatism which concentrates more on national interests than standard conservatism does, while also upholding cultural and ethnic identity, without being outspokenly ultra-nationalist or supporting a far-right approach.
National conservatism is oriented towards upholding national sovereignty, which includes limited immigration and a strong national defence. In Europe, national conservatives are usually eurosceptics. Yoram Hazony has argued for national conservatism in his work The Virtue of Nationalism (2018).
Traditionalist conservatism, also known as classical conservatism, emphasizes the need for the principles of natural law, transcendent moral order, tradition, hierarchy, organic unity, agrarianism, classicism, and high culture as well as the intersecting spheres of loyalty. Some traditionalists have embraced the labels reactionary and counter-revolutionary, defying the stigma that has attached to these terms since the Enlightenment. Having a hierarchical view of society, many traditionalist conservatives, including a few notable Americans such as Ralph Adams Cram, William S. Lind, and Charles A. Coulombe, defend the monarchical political structure as the most natural and beneficial social arrangement.
Cultural conservatives support the preservation of a cultural heritage, either of a nation or a shared culture that is not defined by national boundaries. The shared culture may be as divergent as Western culture or Chinese culture. In the United States, the term "cultural conservative" may imply a conservative position in the culture war. Cultural conservatives hold fast to traditional ways of thinking even in the face of monumental change, strongly believing in traditional values and traditional politics.
Social conservatives believe that society is built upon a fragile network of relationships which need to be upheld through duty, traditional values, and established institutions; and that the government has a role in encouraging or enforcing traditional values or practices. A social conservative wants to preserve traditional morality and social mores, often by opposing what they consider radical policies or social engineering. Some social-conservative stances are the following:
Religious conservatism principally applies the teachings of particular religions to politics—sometimes by merely proclaiming the value of those teachings, at other times by having those teachings influence laws.
In most democracies, political conservatism seeks to uphold traditional family structures and social values. Religious conservatives typically oppose abortion, LGBT behaviour (or, in certain cases, identity), drug use, and sexual activity outside of marriage. In some cases, conservative values are grounded in religious beliefs, and conservatives seek to increase the role of religion in public life.
Paternalistic conservatism is a strand in conservatism which reflects the belief that societies exist and develop organically and that members within them have obligations towards each other. There is particular emphasis on the paternalistic obligation (noblesse oblige) of those who are privileged and wealthy to the poorer parts of society, which is consistent with principles such as duty, organicism, and hierarchy. Paternal conservatives support neither the individual nor the state in principle, but are instead prepared to support either or recommend a balance between the two depending on what is most practical. Paternalistic conservatives historically favour a more aristocratic view and are ideologically related to High Tories.
In more contemporary times, its proponents stress the importance of a social safety net to deal with poverty, supporting limited redistribution of wealth along with government regulation of markets in the interests of both consumers and producers. Paternalistic conservatism first arose as a distinct ideology in the United Kingdom under Prime Minister Benjamin Disraeli's "One Nation" Toryism. There have been a variety of one-nation conservative governments in the United Kingdom with exponents such as Prime Ministers Disraeli, Stanley Baldwin, Neville Chamberlain, Winston Churchill, and Harold Macmillan.
In 19th-century Germany, Chancellor Otto von Bismarck adopted a set of social programs, known as state socialism, which included insurance for workers against sickness, accident, incapacity, and old age. The goal of this conservative state-building strategy was to make ordinary Germans, not just the Junker aristocracy, more loyal to state and Emperor. Chancellor Leo von Caprivi promoted a conservative agenda called the "New Course".
In the United States, Theodore Roosevelt has been identified as the main exponent of progressive conservatism. Roosevelt stated that he had "always believed that wise progressivism and wise conservatism go hand in hand". The Republican administration of President William Howard Taft was progressive conservative, and he described himself as a believer in progressive conservatism. President Dwight D. Eisenhower also declared himself an advocate of progressive conservatism.
In Canada, a variety of conservative governments have been part of the Red Tory tradition, with Canada's former major conservative party being named the Progressive Conservative Party of Canada from 1942 to 2003. Prime Ministers Arthur Meighen, R. B. Bennett, John Diefenbaker, Joe Clark, Brian Mulroney, and Kim Campbell led Red Tory federal governments.
Authoritarian conservatism refers to autocratic regimes that center their ideology around national conservatism, rather than ethnic nationalism, though certain racial components such as antisemitism may exist. Authoritarian conservative movements show strong devotion towards religion, tradition, and culture while also expressing fervent nationalism akin to other far-right nationalist movements. Examples of authoritarian conservative statesmen include Miklós Horthy in Hungary, Ioannis Metaxas in Greece, António de Oliveira Salazar in Portugal, Engelbert Dollfuss in Austria, and Francisco Franco in Spain.
Authoritarian conservative movements were prominent in the same era as fascism, with which they sometimes clashed. Although both ideologies shared core values such as nationalism and had common enemies such as communism and materialism, there was nonetheless a contrast between the traditionalist nature of authoritarian conservatism and the revolutionary, palingenetic, and populist nature of fascism—thus it was common for authoritarian conservative regimes to suppress rising fascist and Nazi movements. The hostility between the two ideologies is highlighted by the struggle for power in Austria, which was marked by the assassination of the ultra-Catholic statesman Engelbert Dollfuss by Austrian Nazis. Likewise, Croatian fascists assassinated King Alexander I of Yugoslavia. In Romania, as the fascist Iron Guard was gaining popularity and Nazi Germany was making advances in Europe with the Anschluss and the Munich Agreement, King Carol II ordered the execution of Corneliu Zelea Codreanu and other top-ranking Romanian fascists. The exiled German Emperor Wilhelm II was an enemy of Adolf Hitler, stating that Nazism made him ashamed to be a German for the first time in his life and referring to the Nazis as "a bunch of shirted gangsters" and "a mob … led by a thousand liars or fanatics".
The political scientist Seymour Martin Lipset has examined the class basis of right-wing extremist politics in the 1920–1960 era. He reports:
Conservative or rightist extremist movements have arisen at different periods in modern history, ranging from the Horthyites in Hungary, the Christian Social Party of Dollfuss in Austria, Der Stahlhelm and other nationalists in pre-Hitler Germany, and Salazar in Portugal, to the pre-1966 Gaullist movements and the monarchists in contemporary France and Italy. The right extremists are conservative, not revolutionary. They seek to change political institutions in order to preserve or restore cultural and economic ones, while extremists of the centre [fascists/nazis] and left [communists/anarchists] seek to use political means for cultural and social revolution. The ideal of the right extremist is not a totalitarian ruler, but a monarch, or a traditionalist who acts like one. Many such movements in Spain, Austria, Hungary, Germany, and Italy have been explicitly monarchist […] The supporters of these movements differ from those of the centrists, tending to be wealthier, and more religious, which is more important in terms of a potential for mass support.
Edmund Fawcett claims that fascism is totalitarian, populist, and anti-pluralist, whereas authoritarian conservatism is somewhat pluralist but most of all elitist and anti-populist. He concludes: "The fascist is a nonconservative who takes anti-liberalism to extremes. The right-wing authoritarian is a conservative who takes fear of democracy to extremes."
During the Cold War, right-wing military dictatorships were prominent in Latin America, with most nations being under military rule by the middle of the 1970s. One example of this was Augusto Pinochet, who ruled over Chile from 1973 to 1990. In the 21st century, the authoritarian style of government experienced a worldwide renaissance with conservative statesmen such as Vladimir Putin in Russia, Recep Tayyip Erdoğan in Turkey, Viktor Orbán in Hungary, Narendra Modi in India, and Donald Trump in the United States.
Conservative parties vary widely from country to country in the goals they wish to achieve. Both conservative and classical liberal parties tend to favour private ownership of property, in opposition to communist, socialist, and green parties, which favour communal ownership or laws regulating responsibility on the part of property owners. Where conservatives and liberals differ is primarily on social issues, where conservatives tend to reject behaviour that does not conform to some social norm. Modern conservative parties often define themselves by their opposition to liberal or socialist parties.
The United States usage of the term conservative is unique to that country.
In India, the Bharatiya Janata Party (BJP), led by Narendra Modi, represents conservative politics. With over 170 million members as of October 2022, the BJP is by far the world's largest political party. It promotes Hindu nationalism, quasi-fascist Hindutva, a hostile foreign policy against Pakistan, and a conservative social and fiscal policy.
Singapore's only conservative party is the People's Action Party (PAP). It is currently in government and has been since independence in 1965. It promotes conservative values in the form of Asian democracy and Asian values.
South Korea's major conservative party, the People Power Party, has changed its form throughout its history. It began as the Democratic Liberal Party, whose first head, Roh Tae-woo, was the first President of the Sixth Republic of South Korea. The Democratic Liberal Party was founded through the merger of Roh Tae-woo's Democratic Justice Party, Kim Young-sam's Reunification Democratic Party, and Kim Jong-pil's New Democratic Republican Party. Kim Young-sam became the fourteenth President of Korea.
When the conservative party was beaten by the opposition in the general election, it changed its form again to follow the party members' demand for reforms. It became the New Korea Party, but it changed again one year later, after President Kim Young-sam was blamed by citizens for the financial crisis that forced South Korea to seek International Monetary Fund assistance. It then changed its name to the Grand National Party (GNP). After the late Kim Dae-jung assumed the presidency in 1998, the GNP remained the opposition party until Lee Myung-bak won the presidential election of 2007.
European conservatism has taken many different expressions. In Italy, which was united by liberals and radicals in the Risorgimento, liberals, not conservatives, emerged as the party of the right. During the first half of the 20th century, when socialism was gaining power around the world, conservatism in countries such as Austria, Germany, Greece, Hungary, Portugal, and Spain transformed into the far-right, becoming more authoritarian and extreme.
Austrian conservatism originated with Prince Klemens von Metternich (1773–1859), who was the architect behind the monarchist and imperialist Conservative Order that was enacted at the Congress of Vienna in the aftermath of the French Revolution and the Napoleonic Wars. The goal was to establish a European balance of power that could guarantee peace and suppress republican and nationalist movements. During its existence, the Austrian Empire was the third most populous monarchy in Europe after the Russian Empire and the United Kingdom. Following its defeat in the Austro-Prussian War, it transformed into the Austro-Hungarian Empire, which was the most diverse state in Europe with twelve nationalities living under a unifying monarch. The Empire was fragmented in the aftermath of World War I, ushering in the democratic First Austrian Republic.
The Austrian Civil War in 1934 saw a series of skirmishes between the right-wing government and socialist forces. When the insurgents were defeated, the government declared martial law and held mass trials, forcing leading socialist politicians, such as Otto Bauer, into exile. The conservatives banned the Social Democratic Party and its affiliated trade unions, and replaced parliamentary democracy with a corporatist and clerical constitution. The Patriotic Front, into which the paramilitary Heimwehr and the Christian Social Party were merged, became the only legal political party in the resulting authoritarian regime, the Federal State of Austria.
While having close ties to Fascist Italy, which was still a monarchy as well as a fellow Catholic nation, rightist Austria harboured strong anti-Prussian and anti-Nazi sentiments. Austria's most influential conservative philosopher, the Catholic aristocrat Erik von Kuehnelt-Leddihn, published many books in which he interpreted Nazism as a leftist, ochlocratic, and demagogic ideology, opposed to the traditional rightist ideals of aristocracy, monarchy, and Christianity. Austria's dictator Engelbert Dollfuss saw Nazism as another form of totalitarian communism, and he saw Adolf Hitler as the German version of Joseph Stalin. The conservatives banned the Austrian Nazi Party and arrested many of its activists. In 1934, Dollfuss was assassinated by Nazi enemies who sought revenge. In response, Benito Mussolini mobilised a part of the Italian army on the Austrian border and threatened Hitler with war in the event of a German invasion of Austria. In 1938, when Nazi Germany annexed Austria in the Anschluss, conservative groups were suppressed: members of the Austrian nobility and the Catholic clergy were arrested and their properties were confiscated.
Following World War II and the return to democracy, Austrian conservatives abandoned the authoritarianism of its past, believing in the principles of class collaboration and political compromise, while Austrian socialists also abandoned their extremism and distanced themselves from the totalitarianism of the Soviet Union. The conservatives formed the Austrian People's Party, which has been the major conservative party in Austria ever since. In contemporary politics, the party was led by Sebastian Kurz, whom the Frankfurter Allgemeine Zeitung nicknamed the "young Metternich".
Having its roots in the conservative Catholic Party, the Christian People's Party retained a conservative edge through the 20th century, supporting the king in the Royal Question, supporting the nuclear family as the cornerstone of society, defending Christian education, and opposing euthanasia. The Christian People's Party dominated politics in post-war Belgium. In 1999, the party's support collapsed, and it became the country's fifth-largest party. Since 2014, the Flemish nationalist and conservative New Flemish Alliance has been the largest party in Belgium.
Danish conservatism emerged with the political grouping Højre (literally "Right"), which, due to its alliance with King Christian IX of Denmark, dominated Danish politics and formed all governments from 1865 to 1901. When a constitutional reform in 1915 stripped the landed gentry of political power, Højre was succeeded by the Conservative People's Party of Denmark, which has since then been the main Danish conservative party. Another Danish conservative party was the Free Conservatives, who were active between 1902 and 1920. Traditionally and historically, conservatism in Denmark has been more populist and agrarian than in Sweden and Norway, where conservatism has been more elitist and urban.
The Conservative People's Party led the government coalition from 1982 to 1993. The party had previously been a member of various governments from 1916 to 1917, 1940 to 1945, 1950 to 1953, and 1968 to 1971. The party was a junior partner in governments led by the Liberals from 2001 to 2011 and again from 2016 to 2019. The party was preceded by 11 years by the Young Conservatives (KU), today the youth movement of the party.
The Conservative People's Party had stable electoral support of close to 15 to 20% at almost all general elections from 1918 to 1971. In the 1970s, it declined to around 5%, but then under the leadership of Poul Schlüter it reached its highest popularity level ever in 1984, receiving 23% of the votes. Since the late 1990s, the party has obtained around 5 to 10% of the vote. In 2022, the party received 5.5% of the vote.
Conservative thinking has also influenced other Danish political parties. In 1995, the Danish People's Party was founded, based on a mixture of conservative, nationalist, and social-democratic ideas. In 2015, the party New Right was established, professing a national-conservative attitude.
The conservative parties in Denmark have always considered the monarchy as a central institution in Denmark.
The conservative party in Finland is the National Coalition Party. The party was founded in 1918, when several monarchist parties united. Although right-wing in the past, today it is a moderate liberal-conservative party. While advocating economic liberalism, it is committed to the social market economy.
Early conservatism in France focused on the rejection of the secularism of the French Revolution, support for the role of the Catholic Church, and the restoration of the monarchy. After the first fall of Napoleon in 1814, the House of Bourbon returned to power in the Bourbon Restoration. Louis XVIII and Charles X, brothers of the executed King Louis XVI, successively mounted the throne and instituted a conservative government intended to restore the proprieties, if not all the institutions, of the Ancien Régime.
After the July Revolution of 1830, Louis Philippe I, a member of the more liberal Orléans branch of the House of Bourbon, proclaimed himself King of the French. The Second French Empire saw an Imperial Bonapartist regime of Napoleon III from 1852 to 1870. The Bourbon monarchist cause was on the verge of victory in the 1870s, but then collapsed because the proposed king, Henri, Count of Chambord, refused to fly the tricolour flag. The turn of the century saw the rise of Action Française – an ultraconservative, reactionary, nationalist, and royalist movement that advocated the restoration of the monarchy.
Religious tensions between Christian rightists and secular leftists heightened in the 1890–1910 era, but moderated after the spirit of unity in fighting the First World War. An authoritarian form of conservatism characterized the Vichy regime of 1940–1944 with heightened antisemitism, opposition to individualism, emphasis on family life, and national direction of the economy.
Conservatism has been the major political force in France since the Second World War, although the number of conservative groups and their lack of stability defy simple categorization. Following the war, conservatives supported Gaullist groups and parties, espoused nationalism, and emphasized tradition, order, and the regeneration of France. Unusually, post-war conservatism in France was formed around the personality of a leader—army officer Charles de Gaulle who led the Free French Forces against Nazi Germany—and it did not draw on traditional French conservatism, but on the Bonapartist tradition. Gaullism in France continues under The Republicans (formerly Union for a Popular Movement); it was previously led by Nicolas Sarkozy, who served as President of France from 2007 to 2012 and whose ideology is known as Sarkozysm.
In 2021, the French intellectual Éric Zemmour founded the nationalist party Reconquête, which has been described as a more elitist and conservative version of Marine Le Pen's National Rally.
Conservatism developed alongside nationalism in Germany, culminating in Germany's victory over France in the Franco-Prussian War, the creation of the unified German Empire in 1871 and the simultaneous rise of Otto von Bismarck on the European political stage. Bismarck's "balance of power" model maintained peace in Europe for decades at the end of the 19th century. His "revolutionary conservatism" was a conservative state-building strategy, based on class collaboration and designed to make ordinary Germans—not just the Junker aristocracy—more loyal to state and Emperor. He created the modern welfare state in Germany in the 1880s. According to scholars, his strategy was:
granting social rights to enhance the integration of a hierarchical society, to forge a bond between workers and the state so as to strengthen the latter, to maintain traditional relations of authority between social and status groups, and to provide a countervailing power against the modernist forces of liberalism and socialism.
Bismarck also enacted universal male suffrage in the new German Empire in 1871. He became a great hero to German conservatives, who erected many monuments to his memory after he left office in 1890.
With the rise of Nazism in 1933, traditional agrarian movements faded and were supplanted by a more command-based economy and forced social integration. Adolf Hitler succeeded in garnering the support of many German industrialists; but prominent traditionalists, including military officers Claus von Stauffenberg and Henning von Tresckow, pastor Dietrich Bonhoeffer, Bishop Clemens August Graf von Galen, and monarchist Carl Friedrich Goerdeler, openly and secretly opposed his policies of euthanasia, genocide, and attacks on organized religion.
More recently, the work of conservative Christian Democratic Union leader and Chancellor Helmut Kohl helped bring about German reunification, along with the closer European integration in the form of the Maastricht Treaty. Today, German conservatism is often associated with politicians such as Chancellor Angela Merkel, whose tenure was marked by attempts to save the common European currency (Euro) from demise. The German conservatives were divided under Merkel due to the refugee crisis in Germany, and many conservatives in the CDU/CSU opposed the refugee and migrant policies developed under her. The 2010s and 2020s also saw the rise of the right-wing populist Alternative for Germany.
The main inter-war conservative party was called the People's Party (PP), which supported constitutional monarchy and opposed the republican Liberal Party. Both parties were suppressed by the authoritarian, arch-conservative, and royalist 4th of August Regime of Ioannis Metaxas in 1936–1941. The PP was able to re-group after the Second World War as part of a United Nationalist Front which achieved power campaigning on a simple anti-communist, nationalist platform during the Greek Civil War (1946–1949). However, the vote received by the PP declined during the so-called "Centrist Interlude" in 1950–1952.
In 1952, Marshal Alexandros Papagos created the Greek Rally as an umbrella for the right-wing forces. The Greek Rally came to power in 1952 and remained the leading party in Greece until 1963. After Papagos' death in 1955, it was reformed as the National Radical Union under Konstantinos Karamanlis. Right-wing governments backed by the palace and the army overthrew the Centre Union government in 1965 and governed the country until the establishment of the far-right Greek junta (1967–1974). After the regime's collapse in August 1974, Karamanlis returned from exile to lead the government and founded the New Democracy party. The new conservative party had four objectives: to confront Turkish expansionism in Cyprus, to reestablish and solidify democratic rule, to give the country a strong government, and to make a powerful moderate party a force in Greek politics.
The Independent Greeks, a newly formed political party in Greece, has also supported conservatism, particularly national and religious conservatism. The Founding Declaration of the Independent Greeks strongly emphasises the preservation of the Greek state and its sovereignty, the Greek people, and the Greek Orthodox Church.
Founded in 1924 as the Conservative Party, Iceland's Independence Party adopted its current name in 1929 after a merger with the Liberal Party. From the beginning, it has been the largest vote-winning party, averaging around 40% of the vote. It combined liberalism and conservatism, supported the nationalization of infrastructure, and advocated class collaboration. While mostly in opposition during the 1930s, it embraced economic liberalism, but accepted the welfare state after the war and participated in governments supportive of state intervention and protectionism. Unlike other Scandinavian conservative (and liberal) parties, it has always had a large working-class following. Since the financial crisis of 2008, the party's support has sunk to around 20–25%.
After unification, Italy was governed successively by the Historical Right, which represented conservative, liberal-conservative, and conservative-liberal positions, and the Historical Left.
After World War I, the country saw the emergence of its first mass parties, notably including the Italian People's Party (PPI), a Christian-democratic party that sought to represent the Catholic majority, which had long refrained from politics. The PPI and the Italian Socialist Party decisively contributed to the loss of strength and authority of the old liberal ruling class, which had not been able to structure itself into a proper party: the Liberal Union was not coherent, and the Italian Liberal Party came too late. In 1921 Benito Mussolini founded the National Fascist Party (PNF), and the next year, through the March on Rome, he was appointed Prime Minister. In 1926 all parties were dissolved except the PNF, which thus remained the only legal party in the Kingdom of Italy until the fall of the regime in July 1943. By 1945, the fascists were discredited, disbanded, and outlawed, while Mussolini was executed in April of that year.
After World War II, the centre-right was dominated by the centrist party Christian Democracy (DC), which included both conservative and centre-left elements. With its landslide victory over the Italian Socialist Party and the Italian Communist Party in 1948, the political centre was in power. In Denis Mack Smith's words, it was "moderately conservative, reasonably tolerant of everything which did not touch religion or property, but above all Catholic and sometimes clerical". It dominated politics until the DC's dissolution in 1994. Among the DC's frequent allies was the conservative-liberal Italian Liberal Party. To the right of the DC stood parties like the royalist Monarchist National Party and the post-fascist Italian Social Movement (MSI).
In 1994, entrepreneur and media tycoon Silvio Berlusconi founded the liberal-conservative party Forza Italia (FI). He won three elections, in 1994, 2001, and 2008, governing the country for almost ten years as Prime Minister. FI formed coalitions with several parties, including the national-conservative National Alliance (AN), heir of the MSI, and the regionalist Lega Nord (LN). FI was briefly incorporated, along with AN, into The People of Freedom party and later revived as the new Forza Italia. After the 2018 general election, the LN and the Five Star Movement formed a populist government, which lasted about a year. In the 2022 general election, a centre-right coalition came to power, this time dominated by Brothers of Italy (FdI), a new national-conservative party born from the ashes of AN. Consequently, FdI, the re-branded Lega, and FI formed a government under FdI leader Giorgia Meloni.
Luxembourg's major conservative party, the Christian Social People's Party, was formed as the Party of the Right in 1914 and adopted its present name in 1945. It was consistently the largest political party in Luxembourg, and dominated politics throughout the 20th century.
Liberalism has been strong in the Netherlands; thus, parties of the right are often liberal-conservative or conservative-liberal. One example is the People's Party for Freedom and Democracy. Even the right-wing populist or far-right Party for Freedom, which won the 2023 general election, supports liberal positions such as women's and gay rights, abortion, and euthanasia.
The Conservative Party of Norway (Norwegian: Høyre, literally "Right") was formed by the old upper class of state officials and wealthy merchants to fight the populist democracy of the Liberal Party, but it lost power in 1884, when parliamentary government was first practiced. It formed its first government under parliamentarism in 1889 and continued to alternate in power with the Liberals until the 1930s, when Labour became the dominant party. It has elements both of paternalism, stressing the responsibilities of the state, and of economic liberalism. It first returned to power in the 1960s. During Kåre Willoch's premiership in the 1980s, much emphasis was laid on liberalizing the credit and housing markets and abolishing the NRK monopoly on television and radio, while supporting law and order in criminal justice and traditional norms in education.
Under Vladimir Putin, the dominant leader since 1999, Russia has promoted explicitly conservative policies in social, cultural, and political matters, both at home and abroad. Putin has criticized globalism and economic liberalism, claiming that "liberalism has become obsolete" and that the vast majority of people in the world oppose multiculturalism, free immigration, and rights for LGBT people. Russian conservatism is distinctive in some respects: it supports a mixed economy with economic intervention, combined with strong nationalist sentiment and a largely populist social conservatism. As a result, Russian conservatism opposes right-libertarian ideals such as the aforementioned economic liberalism found in other conservative movements around the world.
Putin has also promoted new think tanks that bring together like-minded intellectuals and writers. For example, the Izborsky Club, founded in 2012 by Alexander Prokhanov, stresses Russian nationalism, the restoration of Russia's historical greatness, and systematic opposition to liberal ideas and policies. Vladislav Surkov, a senior government official, has been one of the key ideologues during Putin's presidency.
In cultural and social affairs, Putin has collaborated closely with the Russian Orthodox Church. Under Patriarch Kirill of Moscow, the Church has backed the expansion of Russian power into Crimea and eastern Ukraine. More broadly, The New York Times reported in September 2016 on how the Church's policy prescriptions support the Kremlin's appeal to social conservatives:
"A fervent foe of homosexuality and any attempt to put individual rights above those of family, community, or nation, the Russian Orthodox Church helps project Russia as the natural ally of all those who pine for a more secure, illiberal world free from the tradition-crushing rush of globalization, multiculturalism, and women's and gay rights."
In the early 19th century, Swedish conservatism developed alongside Swedish Romanticism. The historian Erik Gustaf Geijer, an exponent of Gothicism, glorified the Viking Age and the Swedish Empire, and the idealist philosopher Christopher Jacob Boström became the chief ideologue of the official state doctrine, which dominated Swedish politics for almost a century. Other influential Swedish conservative Romantics were Esaias Tegnér and Per Daniel Amadeus Atterbom.
Early parliamentary conservatism in Sweden was explicitly elitist. Indeed, the Conservative Party was formed in 1904 with one major goal in mind: to stop the advent of universal suffrage, which it feared would result in socialism. Yet it was a Swedish admiral, the conservative politician Arvid Lindman, who first extended democracy by enacting male suffrage, despite the protests of more traditionalist voices, such as the later Prime Minister, the arch-conservative and authoritarian Ernst Trygger, who railed against progressive policies such as the abolition of the death penalty.
Once a democratic system was in place, Swedish conservatives sought to combine traditional elitism with modern populism. Sweden's most renowned political scientist, the conservative politician Rudolf Kjellén, coined the terms geopolitics and biopolitics in relation to his organic theory of the state. He also developed the corporatist-nationalist concept of Folkhemmet ("the home of the people"), which became the single most powerful political concept in Sweden throughout the 20th century, although it was adopted by the Social Democratic Party, which gave it a more socialist interpretation.
After a brief grand coalition between Left and Right during WWII, the centre-right parties struggled to cooperate due to their ideological differences: the agrarian populism of the Centre Party, the urban liberalism of the Liberal People's Party, and the liberal-conservative elitism of the Moderate Party (the old Conservative Party). However, in 1976 and in 1979 the three parties managed to form a government under Thorbjörn Fälldin, and again in 1991 under the aristocrat Carl Bildt, this time with support from the newly founded Christian Democrats, the most conservative party in contemporary Sweden.
In modern times, mass immigration from distant cultures created widespread populist dissatisfaction, which was not channeled through any of the established parties, which generally espoused multiculturalism. Instead, the 2010s saw the rise of the right-wing populist Sweden Democrats, who surged to become the largest party in the polls on several occasions. The party was ostracized by the other parties until 2019, when Christian Democrat leader Ebba Busch reached out for collaboration, after which the Moderate Party followed suit. In 2022, the centre-right parties formed a government with parliamentary support from the Sweden Democrats, by then the largest party of the bloc. The subsequent Tidö Agreement, negotiated at Tidö Castle, incorporated authoritarian policies such as a stricter stance on immigration and a harsher stance on law and order.
In some aspects, Swiss conservatism is unique. While most European nations have had a long monarchical tradition, been relatively homogenous ethnically, and engaged in many wars, Switzerland is an old republic with a multicultural mosaic of three major nationalities, adhering to the principle of Swiss neutrality.
There are a number of conservative parties in Switzerland's parliament, the Federal Assembly. The largest ones include the Swiss People's Party (SVP), the Christian Democratic People's Party (CVP), and the Conservative Democratic Party of Switzerland (BDP), a splinter of the SVP created in the aftermath of the election of Eveline Widmer-Schlumpf to the Federal Council.
The SVP was formed from the 1971 merger of the Party of Farmers, Traders and Citizens, formed in 1917, and the smaller Democratic Party, formed in 1942. The SVP emphasized agricultural policy and was strong among farmers in German-speaking Protestant areas. As Switzerland considered closer relations with the European Union in the 1990s, the SVP adopted a more militant protectionist and isolationist stance. This stance has allowed it to expand into German-speaking Catholic mountainous areas. The Anti-Defamation League, a non-Swiss lobby group based in the United States, has accused the party of manipulating issues such as immigration, Swiss neutrality, and welfare benefits, thereby awakening antisemitism and racism. The Council of Europe has called the SVP "extreme right", although some scholars dispute this classification; Hans-Georg Betz, for instance, describes it as "populist radical right". The SVP has been the largest party in the Federal Assembly since 2003.
In Ukraine, the conservative strand in the struggle for independence was represented by the 1918 Hetman government: the authoritarian Ukrainian State, headed by the Cossack aristocrat Pavlo Skoropadskyi, which appealed to the tradition of the 17th–18th-century Cossack Hetman state. It had the support of the propertied classes and of conservative and moderate political groups. Vyacheslav Lypynsky was a main ideologue of Ukrainian conservatism.
Modern English conservatives celebrate Anglo-Irish statesman Edmund Burke as their intellectual father. Burke was affiliated with the Whig Party, which eventually split between the Liberal Party and the Conservative Party, but the modern Conservative Party is generally thought to derive primarily from the Tories, and the MPs of the modern Conservative Party are still frequently referred to as Tories.
Shortly after Burke's death in 1797, conservatism was revived as a mainstream political force as the Whigs suffered a series of internal divisions. This new generation of conservatives derived their politics not from Burke, but from his predecessor, the Viscount Bolingbroke (1678–1751), who was a Jacobite and traditional Tory, lacking Burke's sympathies for Whiggish policies such as Catholic emancipation and American independence (famously attacked by Samuel Johnson in "Taxation No Tyranny").
In the first half of the 19th century, many newspapers, magazines, and journals promoted loyalist or right-wing attitudes in religion, politics, and international affairs. Burke was seldom mentioned, but William Pitt the Younger (1759–1806) became a conspicuous hero. The most prominent journals included The Quarterly Review, founded in 1809 as a counterweight to the Whigs' Edinburgh Review, and the even more conservative Blackwood's Magazine. The Quarterly Review promoted a balanced Canningite Toryism, as it was neutral on Catholic emancipation and only mildly critical of Nonconformist dissent; it opposed slavery and supported the current poor laws; and it was "aggressively imperialist". The high-church clergy of the Church of England read the Orthodox Churchman's Magazine, which was equally hostile to Jewish, Catholic, Jacobin, Methodist and Unitarian spokesmen. Anchoring the ultra-Tories, Blackwood's Edinburgh Magazine stood firmly against Catholic emancipation and favoured slavery, cheap money, mercantilism, the Navigation Acts, and the Holy Alliance.
Conservatism evolved after 1820, embracing free trade in 1846 and a commitment to democracy, especially under Benjamin Disraeli. The effect was to significantly strengthen conservatism as a grassroots political force. No longer the philosophical defense of the landed aristocracy, conservatism was refreshed into a commitment to the ideals of order, both secular and religious, expanding imperialism, a strengthened monarchy, and a more generous vision of the welfare state as opposed to the punitive vision of the Whigs and liberals. As early as 1835, Disraeli attacked the Whigs and utilitarians as slavishly devoted to an industrial oligarchy, while he described his fellow Tories as the only "really democratic party of England", devoted to the interests of the whole people. Nevertheless, inside the party there was a tension between the growing numbers of wealthy businessmen on the one side and the aristocracy and rural gentry on the other. The aristocracy gained strength as businessmen discovered they could use their wealth to buy a peerage and a country estate.
Some conservatives lamented the passing of a pastoral world where the ethos of noblesse oblige had promoted respect from the lower classes. They saw the Anglican Church and the aristocracy as balances against commercial wealth. They worked toward legislation for improved working conditions and urban housing. This viewpoint would later be called Tory democracy. However, since Burke, there has always been tension between traditional aristocratic conservatism and the wealthy liberal business class.
In 1834, Tory Prime Minister Robert Peel issued the "Tamworth Manifesto", in which he pledged to endorse moderate political reform. This marked the beginning of the transformation from High Tory reactionism towards a more modern form of conservatism. As a result, the party became known as the Conservative Party—a name it has retained to this day. However, Peel would also be at the root of a split in the party between the traditional Tories (led by the Earl of Derby and Benjamin Disraeli) and the "Peelites" (led first by Peel himself, then by the Earl of Aberdeen). The split occurred in 1846 over the issue of free trade, which Peel supported, versus protectionism, supported by Derby. The majority of the party sided with Derby, whilst about a third split away, eventually merging with the Whigs and the radicals to form the Liberal Party. Despite the split, the mainstream Conservative Party accepted the doctrine of free trade in 1852.
In the second half of the 19th century, the Liberal Party faced political schisms, especially over Irish Home Rule. Leader William Gladstone (himself a former Peelite) sought to give Ireland a degree of autonomy, a move that elements in both the left and right wings of his party opposed. These split off to become the Liberal Unionists (led by Joseph Chamberlain), forming a coalition with the Conservatives before merging with them in 1912. The Liberal Unionist influence dragged the Conservative Party towards the left, as Conservative governments passed a number of progressive reforms at the turn of the 20th century. By the late 19th century, the traditional business supporters of the Liberal Party had joined the Conservatives, making them the party of business and commerce as well.
After a period of Liberal dominance before the First World War, the Conservatives gradually became more influential in government, regaining full control of the cabinet in 1922. In the inter-war period, conservatism was the major ideology in Britain as the Liberal Party vied with the Labour Party for control of the left. After the Second World War, the first Labour government (1945–1951) under Clement Attlee embarked on a program of nationalization of industry and the promotion of social welfare. The Conservatives generally accepted those policies until the 1980s.
In the 1980s, the Conservative government of Margaret Thatcher, guided by neoliberal economics, reversed many of Labour's social programmes, privatised large parts of the UK economy, and sold state-owned assets. The Conservative Party also adopted soft eurosceptic politics and opposed a federal Europe. Other conservative political parties, such as the Democratic Unionist Party (DUP, founded in 1971) and the United Kingdom Independence Party (UKIP, founded in 1993), began to appear, although they have yet to make any significant impact at Westminster. As of 2014, the DUP was the largest political party in the ruling coalition in the Northern Ireland Assembly, and from 2017 to 2019 the DUP provided support for the Conservative minority government under a confidence-and-supply arrangement.
Conservative elites have long dominated Latin American nations. Mostly, this has been achieved through control of civil institutions, the Catholic Church, and the military, rather than through party politics. Typically, the Church was exempt from taxes and its employees immune from civil prosecution. Where conservative parties were weak or non-existent, conservatives were more likely to rely on military dictatorship as a preferred form of government.
However, in some nations where the elites were able to mobilize popular support for conservative parties, longer periods of political stability were achieved. Chile, Colombia, and Venezuela are examples of nations that developed strong conservative parties. Argentina, Brazil, El Salvador, and Peru are examples of nations where this did not occur. The Conservative Party of Venezuela disappeared following the Federal Wars of 1858–1863. Chile's conservative party, the National Party, disbanded in 1973 following a military coup and did not re-emerge as a political force following the subsequent return to democracy.
Louis Hartz explained conservatism in Latin American nations as a result of their settlement as feudal societies.
Conservatism in Brazil originates from the cultural and historical tradition of Brazil, whose cultural roots are Luso-Iberian and Roman Catholic. More traditional conservative historical views and features include belief in political federalism and monarchism.
In cultural life, Brazilian conservatism from the 20th century on includes names such as Mário Ferreira dos Santos and Vicente Ferreira da Silva in philosophy; Gerardo Melo Mourão and Otto Maria Carpeaux in literature; Bruno Tolentino in poetry; Olavo de Carvalho, Paulo Francis and Luís Ernesto Lacombe in journalism; Manuel de Oliveira Lima and João Camilo de Oliveira Torres in historiography; Sobral Pinto and Miguel Reale in law; Gustavo Corção, Plinio Corrêa de Oliveira, Father Léo and Father Paulo Ricardo in religion; and Roberto Campos and Mario Henrique Simonsen in economics.
In contemporary politics, a conservative wave began roughly around the 2014 Brazilian presidential election. Commentators, citing an increase in the number of parliamentarians linked to more conservative segments such as ruralists, the military, the police, and religious conservatives, considered the National Congress of Brazil elected in 2014 the most conservative since the re-democratization movement. The subsequent economic crisis of 2015 and investigations of corruption scandals led to a right-wing movement that sought to rescue ideas from economic liberalism and conservatism in opposition to socialism. At the same time, fiscal conservatives such as those that make up the Free Brazil Movement emerged, among many others. National-conservative candidate Jair Bolsonaro of the Social Liberal Party won the 2018 Brazilian presidential election.
The active conservative parties in Brazil are Brazil Union, Progressistas, Republicans, Liberal Party, Brazilian Labour Renewal Party, Patriota, Brazilian Labour Party, Social Christian Party and Brasil 35.
The Colombian Conservative Party, founded in 1849, traces its origins to opponents of General Francisco de Paula Santander's 1833–1837 administration. While the term "liberal" had been used to describe all political forces in Colombia, the conservatives began describing themselves as "conservative liberals" and their opponents as "red liberals". From the 1860s until the present, the party has supported strong central government and the Catholic Church, especially its role as protector of the sanctity of the family, and opposed separation of church and state. Its policies include the legal equality of all men, the citizen's right to own property, and opposition to dictatorship. It has usually been Colombia's second largest party, with the Colombian Liberal Party being the largest.
Canada's conservatives had their roots in the Tory loyalists who left America after the American Revolution. They developed in the socio-economic and political cleavages that existed during the first three decades of the 19th century and had the support of the mercantile, professional, and religious elites in Ontario and to a lesser extent in Quebec. Holding a monopoly over administrative and judicial offices, they were called the Family Compact in Ontario and the Chateau Clique in Quebec. John A. Macdonald's successful leadership of the movement to confederate the provinces and his subsequent tenure as prime minister for most of the late 19th century rested on his ability to bring together the English-speaking Protestant aristocracy and the ultramontane Catholic hierarchy of Quebec and to keep them united in a conservative coalition.
The conservatives combined pro-market liberalism and Toryism. They generally supported an activist government and state intervention in the marketplace, and their policies were marked by noblesse oblige, a paternalistic responsibility of the elites for the less well-off. The party was known as the Progressive Conservatives from 1942 until 2003, when the party merged with the Canadian Alliance to form the Conservative Party of Canada.
The conservative and autonomist Union Nationale, led by Maurice Duplessis, governed the province of Quebec for most of the period from 1936 to 1960, in close alliance with the Catholic Church, small rural elites, farmers, and business elites. This period, known by liberals as the Great Darkness, ended with the Quiet Revolution, and the party went into terminal decline.
By the end of the 1960s, the political debate in Quebec centered on the question of independence, pitting the social democratic and sovereignist Parti Québécois against the centrist and federalist Quebec Liberal Party, thereby marginalizing the conservative movement. Most French Canadian conservatives rallied to either the Quebec Liberal Party or the Parti Québécois, while some still tried to offer an autonomist third way with what was left of the Union Nationale or the more populist Ralliement créditiste du Québec and Parti national populaire, but by the 1981 provincial election politically organized conservatism had been obliterated in Quebec. It slowly started to revive at the 1994 provincial election with the Action démocratique du Québec, which served as the Official Opposition in the National Assembly from 2007 to 2008, before merging in 2012 with François Legault's Coalition Avenir Québec, which took power in 2018. The modern Conservative Party of Canada has rebranded conservatism and, under the leadership of Stephen Harper, added more conservative policies. Yoram Hazony, a scholar of the history and ideology of conservatism, identified the Canadian psychologist Jordan Peterson as the most significant conservative thinker to appear in the English-speaking world in a generation.
The meaning of conservatism in the United States is different from the way the word is used elsewhere. As historian Leo P. Ribuffo notes, "what Americans now call conservatism much of the world calls liberalism or neoliberalism". However, the prominent American conservative writer Russell Kirk, in his influential work The Conservative Mind (1953), argued that conservatism had been brought to the United States and he interpreted the American Revolution as a "conservative revolution" against royalist innovation.
American conservatism is a broad system of political beliefs in the United States, which is characterized by respect for American traditions, support for Judeo-Christian values, economic liberalism, anti-communism, and a defense of Western culture. Liberty within the bounds of conformity to conservatism is a core value, with a particular emphasis on strengthening the free market, limiting the size and scope of government, and opposing high taxes as well as government or labor union encroachment on the entrepreneur.
In the 1830s, the Democratic Party became divided between Southern Democrats, who supported slavery, secession, and later segregation, and Northern Democrats, who tended to support the abolition of slavery, the Union, and equality. Many Democrats were conservative in the sense that they wanted to keep things as they had been in the past, especially as far as race was concerned. They generally favored poorer farmers and urban workers and were hostile to banks, industrialization, and high tariffs.
The post-Civil War Republican Party elected the first people of color to serve in both local and national political office. The Southern Democrats united with pro-segregation Northern Republicans to form the Conservative Coalition, which successfully put an end to Black Americans being elected to national political office until Edward Brooke was elected Senator from Massachusetts in 1966. Conservative Democrats influenced US politics until the Republican Revolution of 1994, when the American South shifted from solidly Democratic to solidly Republican, while maintaining its conservative values.
In the late 19th century, the Democratic Party split into two factions: the more conservative Eastern business faction (led by Grover Cleveland) favored gold, while the South and West (led by William Jennings Bryan) wanted more silver in order to raise prices for their crops. In 1892, Cleveland won the election on a conservative platform that supported maintaining the gold standard, reducing tariffs, and taking a laissez-faire approach to government intervention. A severe nationwide depression ruined his plans. In 1896, when the liberal Bryan won the nomination and campaigned for bimetallism, money backed by both gold and silver, many of Cleveland's supporters backed the Gold Democrats instead. The conservative wing nominated Alton B. Parker in 1904, but he was soundly defeated.
The major conservative party in the United States today is the Republican Party, also known as the GOP (Grand Old Party). Modern American conservatives often consider individual liberty to be the fundamental trait of democracy, as long as it conforms to conservative values, small government, deregulation, economic liberalism, and free trade—which contrasts with modern American liberals, who generally place a greater value on social equality and social justice. Other major priorities within American conservatism include support for the traditional family, law and order, the right to bear arms, Christian values, anti-communism, and a defense of "Western civilization from the challenges of modernist culture and totalitarian governments". Economic conservatives and libertarians favor small government, low taxes, limited regulation, and free enterprise. Some social conservatives see traditional social values threatened by secularism, so they support school prayer and oppose abortion and homosexuality. Neoconservatives want to expand American ideals throughout the world and show strong support for Israel. Paleoconservatives oppose multiculturalism and press for restrictions on immigration. Most US conservatives prefer Republicans over Democrats, and most factions favor a strong foreign policy and a strong military.
The conservative movement of the 1950s attempted to bring together the divergent conservative strands, stressing the need for unity to prevent the spread of "godless communism", which Reagan later labeled an "evil empire". During the Reagan administration, conservatives also supported the so-called Reagan Doctrine, under which the US, as part of a Cold War strategy, provided military and other support to guerrilla insurgencies that were fighting governments identified as socialist or communist. The Reagan administration also adopted neoliberalism and Reaganomics (pejoratively referred to as trickle-down economics), resulting in the economic growth of the 1980s and trillion-dollar deficits. Other modern conservative positions include anti-environmentalism. On average, American conservatives desire tougher foreign policies than liberals do.
The Tea Party movement, founded in 2009, proved a large outlet for populist American conservative ideas. Their stated goals included rigorous adherence to the US constitution, lower taxes, and opposition to a growing role for the federal government in health care. Electorally, it was considered a key force in Republicans reclaiming control of the US House of Representatives in 2010.
The Liberal Party of Australia adheres to the principles of social conservatism and liberal conservatism; it is liberal in the economic sense. Commentators explain: "In America, 'liberal' means left-of-center, and it is a pejorative term when used by conservatives in adversarial political debate. In Australia, of course, the conservatives are in the Liberal Party." The National Right is the most organised and reactionary of the three factions within the party.
Other conservative parties include the National Party of Australia (a sister party of the Liberals), the Family First Party, the Democratic Labor Party, the Shooters, Fishers and Farmers Party, the Australian Conservatives, and Katter's Australian Party.
The largest party in the country is the Australian Labor Party, and its dominant faction is Labor Right, a socially conservative element. Australia undertook significant economic reform under the Labor Party in the mid-1980s. Consequently, issues like protectionism, welfare reform, privatization, and deregulation are no longer debated in the political space as they are in Europe or North America.
Political scientist James Jupp writes that "[the] decline in English influences on Australian reformism and radicalism, and appropriation of the symbols of Empire by conservatives continued under the Liberal Party leadership of Sir Robert Menzies, which lasted until 1966".
Historic conservatism in New Zealand traces its roots to the unorganised conservative opposition to the New Zealand Liberal Party in the late 19th century. In 1909 this ideological strand found a more organised expression in the Reform Party, a forerunner to the contemporary New Zealand National Party, which absorbed historic conservative elements. The National Party, established in 1936, embodies a spectrum of tendencies, including conservative and liberal. Throughout its history, the party has oscillated between periods of conservative emphasis and liberal reform. Its stated values include "individual freedom and choice" and "limited government".
In the 1980s and 1990s both the National Party and its main opposing party, the traditionally left-wing Labour Party, implemented free-market reforms.
The New Zealand First party, which split from the National Party in 1993, espouses nationalist and conservative principles.
The Big Five Personality Model has applications in the study of political psychology. Several studies have found that individuals who score high in Conscientiousness (the quality of working hard and being careful) are more likely to possess a right-wing political identification. On the opposite end of the spectrum, a strong correlation has been identified between high scores in Openness to Experience and a left-leaning ideology. Consistent with the positive relationship between conscientiousness and job performance, a 2021 study found that conservative service workers earn higher ratings, evaluations, and tips than socially liberal ones.
A number of studies have found that disgust is tightly linked to political orientation. People who are highly sensitive to disgusting images are more likely to align with the political right and value traditional ideals of bodily and spiritual purity, tending to oppose, for example, abortion and gay marriage.
Research has also found that people who are more disgust sensitive tend to favour their own in-group over out-groups. The reason behind this may be that people begin to associate outsiders with disease while associating health with people similar to themselves.
The higher one's disgust sensitivity is, the greater the tendency to make more conservative moral judgments. Disgust sensitivity is associated with moral hypervigilance: people with higher disgust sensitivity are more likely to think that suspects of a crime are guilty and to view them as evil if found guilty, thus endorsing harsher punishment in the setting of a court.
The right-wing authoritarian personality (RWA) is a personality type that describes somebody who is highly submissive to their authority figures, acts aggressively in the name of said authorities, and is conformist in thought and behaviour. According to psychologist Bob Altemeyer, individuals who are politically conservative tend to rank high in RWA. This finding echoed earlier work by Theodor W. Adorno in The Authoritarian Personality (1950), based on the F-scale personality test.
A study done on Israeli and Palestinian students in Israel found that RWA scores of right-wing party supporters were significantly higher than those of left-wing party supporters. However, a 2005 study by H. Michael Crowson and colleagues suggested a moderate gap between RWA and other conservative positions, stating that their "results indicated that conservatism is not synonymous with RWA".
In 1973, British psychologist Glenn Wilson published an influential book providing evidence that a general factor underlying conservative beliefs is "fear of uncertainty". A meta-analysis of research literature by Jost, Glaser, Kruglanski, and Sulloway in 2003 found that many factors, such as intolerance of ambiguity and need for cognitive closure, contribute to the degree of one's political conservatism and its manifestations in decision-making. A study by Kathleen Maclay stated these traits "might be associated with such generally valued characteristics as personal commitment and unwavering loyalty". The research also suggested that while most people are resistant to change, social liberals are more tolerant of it.
Social dominance orientation (SDO) is a personality trait measuring an individual's support for social hierarchy and the extent to which they desire their in-group to be superior to out-groups. The psychologist Felicia Pratto and her colleagues have found evidence to support the claim that a high SDO is strongly correlated with conservative views and with opposition to social engineering to promote equality. Pratto and her colleagues also found that high SDO scores were highly correlated with measures of prejudice.
However, David J. Schneider argued for a more complex relationship between the three factors, writing that "correlations between prejudice and political conservatism are reduced virtually to zero when controls for SDO are instituted, suggesting that the conservatism–prejudice link is caused by SDO". Conservative political theorist Kenneth Minogue criticized Pratto's work, saying:
It is characteristic of the conservative temperament to value established identities, to praise habit and to respect prejudice, not because it is irrational, but because such things anchor the darting impulses of human beings in solidities of custom which we do not often begin to value until we are already losing them. Radicalism often generates youth movements, while conservatism is a condition found among the mature, who have discovered what it is in life they most value.
A 1996 study by Pratto and her colleagues examined the topic of racism. Contrary to the researchers' predictions, correlations between conservatism and racism were strongest among the most educated individuals and weakest among the least educated. They also found that the correlation between racism and conservatism could be accounted for by their mutual relationship with SDO.
In his book Gross National Happiness (2008), Arthur C. Brooks presents the finding that conservatives are roughly twice as happy as social liberals. A 2008 study suggested that conservatives tend to be happier than social liberals because of their tendency to justify the current state of affairs and to remain unbothered by inequalities in society. A 2012 study disputed this, demonstrating that conservatives expressed greater personal agency (e.g., personal control, responsibility), more positive outlook (e.g., optimism, self-worth), and more transcendent moral beliefs (e.g., greater religiosity, greater moral clarity).
| [
{
"paragraph_id": 0,
"text": "Conservatism is a cultural, social, and political philosophy that seeks to promote and to preserve traditional institutions, customs, and values. The central tenets of conservatism may vary in relation to the culture and civilization in which it appears. In Western culture, depending on the particular nation, conservatives seek to promote a range of social institutions, such as the nuclear family, organized religion, the military, the nation-state, property rights, rule of law, aristocracy, and monarchy. Conservatives tend to favour institutions and practices that guarantee social order and historical continuity.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Edmund Burke, an 18th-century Anglo-Irish statesman who opposed the French Revolution but supported the American Revolution, is credited as one of the main philosophers of conservatism in the 1790s. The first established use of the term in a political context originated in 1818 with François-René de Chateaubriand during the period of Bourbon Restoration that sought to roll back the policies of the French Revolution and establish social order.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Conservative thought has varied considerably as it has adapted itself to existing traditions and national cultures. Thus, conservatives from different parts of the world—each upholding their respective traditions—may disagree on a wide range of issues. Historically associated with right-wing politics, the term has been used to describe a wide range of views. Conservatism may be either more libertarian or more authoritarian; more populist or more elitist; more progressive or more reactionary; more moderate or more extreme.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Some political scientists, such as Samuel P. Huntington, have seen conservatism as situational. Under this definition, conservatives are seen as defending the established institutions of their time. According to Quintin Hogg, the chairman of the British Conservative Party in 1959: \"Conservatism is not so much a philosophy as an attitude, a constant force, performing a timeless function in the development of a free society, and corresponding to a deep and permanent requirement of human nature itself.\" Conservatism is often used as a generic term to describe a \"right-wing viewpoint occupying the political spectrum between [classical] liberalism and fascism\".",
"title": "Themes"
},
{
"paragraph_id": 4,
"text": "Despite the lack of a universal definition, certain themes can be recognised as common across conservative thought. According to Michael Oakeshott:",
"title": "Themes"
},
{
"paragraph_id": 5,
"text": "To be conservative […] is to prefer the familiar to the unknown, to prefer the tried to the untried, fact to mystery, the actual to the possible, the limited to the unbounded, the near to the distant, the sufficient to the superabundant, the convenient to the perfect, present laughter to utopian bliss.",
"title": "Themes"
},
{
"paragraph_id": 6,
"text": "Such traditionalism may be a reflection of trust in time-tested methods of social organisation, giving 'votes to the dead'. Traditions may also be steeped in a sense of identity.",
"title": "Themes"
},
{
"paragraph_id": 7,
"text": "In contrast to the tradition-based definition of conservatism, some left-wing political theorists like Corey Robin define conservatism primarily in terms of a general defense of social and economic inequality. From this perspective, conservatism is less an attempt to uphold old institutions and more \"a meditation on—and theoretical rendition of—the felt experience of having power, seeing it threatened, and trying to win it back\". On another occasion, Robin argues for a more complex relation:",
"title": "Themes"
},
{
"paragraph_id": 8,
"text": "Conservatism is a defense of established hierarchies, but it is also fearful of those established hierarchies. It sees in their assuredness of power the source of corruption, decadence and decline. Ruling regimes require some kind of irritant, a grain of sand in the oyster, to reactivate their latent powers, to exercise their atrophied muscles, to make their pearls.",
"title": "Themes"
},
{
"paragraph_id": 9,
"text": "The political philosopher Yoram Hazony argues that, in a traditional conservative community, members have importance and influence to the degree they are honoured within the social hierarchy, which includes factors such as age, experience, and wisdom. The word hierarchy has religious roots and translates to 'rule of a high priest.'",
"title": "Themes"
},
{
"paragraph_id": 10,
"text": "Conservatism has been called a \"philosophy of human imperfection\" by Noël O'Sullivan, reflecting among its adherents a negative view of human nature and pessimism of the potential to improve it through 'utopian' schemes. The \"intellectual godfather of the realist right\", Thomas Hobbes, argued that the state of nature for humans was \"poor, nasty, brutish, and short\", requiring centralised authority.",
"title": "Themes"
},
{
"paragraph_id": 11,
"text": "Authority is a core tenet of conservatism. More specifically, conservatives tend to believe in traditional authority. This form of authority, according to Max Weber, is \"resting on an established belief in the sanctity of immemorial traditions and the legitimacy of those exercising authority under them\". Alexandre Kojève distinguishes between two different forms of traditional authority:",
"title": "Themes"
},
{
"paragraph_id": 12,
"text": "Robert Nisbet acknowledges that the decline of traditional authority in the modern world is partly linked with the retreat of old institutions such as guild, order, parish, and family—institutions that formerly acted as intermediaries between the state and the individual. Hannah Arendt claims that the modern world suffers an existential crisis with a \"dramatic breakdown of all traditional authorities,\" which are needed for the continuity of an established civilisation.",
"title": "Themes"
},
{
"paragraph_id": 13,
"text": "Reactionism is a tradition in right-wing politics that opposes policies for the social transformation of society. In popular usage, reactionism refers to a staunch traditionalist conservative political perspective of a person opposed to social, political, and economic change. Adherents of conservatism often oppose certain aspects of modernity (for example mass culture and secularism) and seek a return to traditional values, though different groups of conservatives may choose different traditional values to preserve.",
"title": "Themes"
},
{
"paragraph_id": 14,
"text": "Some political scientists, such as Corey Robin, treat the words reactionary and conservative as synonyms. Others, such as Mark Lilla, argue that reactionism and conservatism are distinct worldviews. Francis Wilson defines conservatism as \"a philosophy of social evolution, in which certain lasting values are defended within the framework of the tension of political conflict\".",
"title": "Themes"
},
{
"paragraph_id": 15,
"text": "A reactionary is a person who holds political views that favor a return to the status quo ante, the previous political state of society, which that person believes possessed positive characteristics absent from contemporary society. An early example of a powerful reactionary movement was German Romanticism, which centered around concepts of organicism, medievalism, and traditionalism against the forces of rationalism, secularism, and individualism that were unleashed in the French Revolution.",
"title": "Themes"
},
{
"paragraph_id": 16,
"text": "In political discourse, being a reactionary is generally regarded as negative; Peter King observed that it is \"an unsought-for label, used as a torment rather than a badge of honor\". Despite this, the descriptor has been adopted by writers such as the Italian esoteric traditionalist Julius Evola, the Austrian monarchist Erik von Kuehnelt-Leddihn, the Colombian political theologian Nicolás Gómez Dávila, and the American historian John Lukacs.",
"title": "Themes"
},
{
"paragraph_id": 17,
"text": "In Great Britain, the Tory movement during the Restoration period (1660–1688) was a form of proto-conservatism that supported a hierarchical society with a monarch who ruled by divine right. However, Tories differ from most later, more moderate, mainstream conservatives in that they opposed the idea of popular sovereignty and rejected the authority of parliament and freedom of religion. Robert Filmer's royalist treatise Patriarcha (published in 1680 but written before the English Civil War of 1642–1651) became accepted as the statement of their doctrine.",
"title": "Intellectual history"
},
{
"paragraph_id": 18,
"text": "However, the Glorious Revolution of 1688 damaged this principle by establishing a constitutional government in England, leading to the hegemony of the Tory-opposed Whig ideology. Faced with defeat, the Tories reformed their movement. They adopted more moderate conservative positions, such as holding that sovereignty was vested in the three estates of Crown, Lords, and Commons rather than solely in the Crown. Richard Hooker (1554–1600), Marquess of Halifax (1633–1695) and David Hume (1711–1776) were proto-conservatives of the period. Halifax promoted pragmatism in government whilst Hume argued against political rationalism and utopianism.",
"title": "Intellectual history"
},
{
"paragraph_id": 19,
"text": "Edmund Burke (1729–1797) has been widely regarded as the philosophical founder of modern conservatism as we know it today. Burke served as the private secretary to the Marquis of Rockingham and as official pamphleteer to the Rockingham branch of the Whig party. Together with the Tories, they were the conservatives in the late 18th century United Kingdom.",
"title": "Intellectual history"
},
{
"paragraph_id": 20,
"text": "Burke's views were a mixture of conservatism and republicanism. He supported the American Revolution of 1775–1783 but abhorred the violence of the French Revolution of 1789–1799. He accepted the conservative ideals of private property and the economics of Adam Smith (1723–1790), but thought that economics should remain subordinate to the conservative social ethic, that capitalism should be subordinate to the medieval social tradition, and that the business class should be subordinate to aristocracy. He insisted on standards of honour derived from the medieval aristocratic tradition and saw the aristocracy as the nation's natural leaders. That meant limits on the powers of the Crown, since he found the institutions of Parliament to be better informed than commissions appointed by the executive. He favored an established church, but allowed for a degree of religious toleration. Burke ultimately justified the social order on the basis of tradition: tradition represented the wisdom of the species, and he valued community and social harmony over social reforms.",
"title": "Intellectual history"
},
{
"paragraph_id": 21,
"text": "Another form of conservatism developed in France in parallel to conservatism in Britain. It was influenced by Counter-Enlightenment works by philosophers such as Joseph de Maistre (1753–1821) and Louis de Bonald (1754–1840). Many continental conservatives do not support separation of church and state, with most supporting state recognition of and cooperation with the Catholic Church, such as had existed in France before the Revolution. Conservatives were also early to embrace nationalism, which was previously associated with liberalism and the Revolution in France. Another early French conservative, François-René de Chateaubriand (1768–1848), espoused a romantic opposition to modernity, contrasting its emptiness with the 'full heart' of traditional faith and loyalty. Elsewhere on the continent, German thinkers Justus Möser (1720–1794) and Friedrich von Gentz (1764–1832) criticized the Declaration of the Rights of Man and of the Citizen that came of the Revolution. Opposition was also expressed by German idealists such as Adam Müller (1779–1829) and Georg Wilhelm Friedrich Hegel (1771–1830), the latter inspiring both leftist and rightist followers.",
"title": "Intellectual history"
},
{
"paragraph_id": 22,
"text": "Both Burke and Maistre were critical of democracy in general, though their reasons differed. Maistre was pessimistic about humans being able to follow rules, while Burke was skeptical about humans' innate ability to make rules. For Maistre, rules had a divine origin, while Burke believed they arose from custom. The lack of custom for Burke, and the lack of divine guidance for Maistre, meant that people would act in terrible ways. Both also believed that liberty of the wrong kind led to bewilderment and political breakdown. Their ideas would together flow into a stream of anti-rationalist, romantic conservatism, but would still stay separate. Whereas Burke was more open to argumentation and disagreement, Maistre wanted faith and authority, leading to a more illiberal strain of thought.",
"title": "Intellectual history"
},
{
"paragraph_id": 23,
"text": "Liberal conservatism is a variant of conservatism that is strongly influenced by liberal stances. It incorporates the classical liberal view of minimal economic interventionism, meaning that individuals should be free to participate in the market and generate wealth without government interference. However, individuals cannot be thoroughly depended on to act responsibly in other spheres of life; therefore, liberal conservatives believe that a strong state is necessary to ensure law and order, and social institutions are needed to nurture a sense of duty and responsibility to the nation.",
"title": "Ideological variants"
},
{
"paragraph_id": 24,
"text": "Originally opposed to capitalism and the industrial revolution, the conservative ideology in many countries adopted economic liberalism. This is also the case in countries where liberal economic ideas have been the tradition such as the United States and are thus considered conservative. A secondary meaning for the term liberal conservatism, which developed in Europe, is a combination of more modern conservative views with those of social liberalism. Often this involves stressing views of free market economics and belief in individual responsibility, with a defence of civil rights, environmentalism and support for a limited welfare state. In continental Europe, this is sometimes also translated into English as social conservatism.",
"title": "Ideological variants"
},
{
"paragraph_id": 25,
"text": "Libertarian conservatism describes certain political ideologies most prominently within the United States which combine libertarian economic issues with aspects of conservatism. Its four main branches are constitutionalism, paleolibertarianism, small government conservatism and Christian libertarianism. They generally differ from paleoconservatives, in that they favor more personal and economic freedom. The agorist philosopher Samuel Edward Konkin III labeled libertarian conservatism right-libertarianism. In the United States, fusionism is the combination of traditionalist and social conservatism with political and economic right-libertarianism.",
"title": "Ideological variants"
},
{
"paragraph_id": 26,
"text": "In contrast to paleoconservatives, libertarian conservatives support strict laissez-faire policies such as free trade and opposition to business regulations and any national bank. They are often opposed to environmental regulations, corporate welfare, subsidies and other areas of economic intervention. Many conservatives, especially in the United States, believe that the government should not play a major role in regulating business and managing the economy. They typically oppose high taxes and income redistribution. Such efforts, they argue, only serve to exacerbate the scourge of unemployment and poverty by lessening the ability for businesses to hire employees due to higher tax impositions.",
"title": "Ideological variants"
},
{
"paragraph_id": 27,
"text": "Fiscal conservatism is the economic philosophy of prudence in government spending and debt. In Reflections on the Revolution in France (1790), Edmund Burke argued that a government does not have the right to run up large debts and then throw the burden on the taxpayer:",
"title": "Ideological variants"
},
{
"paragraph_id": 28,
"text": "[I]t is to the property of the citizen, and not to the demands of the creditor of the state, that the first and original faith of civil society is pledged. The claim of the citizen is prior in time, paramount in title, superior in equity. The fortunes of individuals, whether possessed by acquisition or by descent or in virtue of a participation in the goods of some community, were no part of the creditor's security, expressed or implied...[T]he public, whether represented by a monarch or by a senate, can pledge nothing but the public estate; and it can have no public estate except in what it derives from a just and proportioned imposition upon the citizens at large.",
"title": "Ideological variants"
},
{
"paragraph_id": 29,
"text": "National conservatism is a political term used primarily in Europe to describe a variant of conservatism which concentrates more on national interests than standard conservatism as well as upholding cultural and ethnic identity, while not being outspokenly ultra-nationalist or supporting a far-right approach.",
"title": "Ideological variants"
},
{
"paragraph_id": 30,
"text": "National conservatism is oriented towards upholding national sovereignty, which includes limited immigration and a strong national defence. In Europe, national conservatives are usually eurosceptics. Yoram Hazony has argued for national conservatism in his work The Virtue of Nationalism (2018).",
"title": "Ideological variants"
},
{
"paragraph_id": 31,
"text": "Traditionalist conservatism, also known as classical conservatism, emphasizes the need for the principles of natural law, transcendent moral order, tradition, hierarchy, organic unity, agrarianism, classicism, and high culture as well as the intersecting spheres of loyalty. Some traditionalists have embraced the labels reactionary and counter-revolutionary, defying the stigma that has attached to these terms since the Enlightenment. Having a hierarchical view of society, many traditionalist conservatives, including a few notable Americans such as Ralph Adams Cram, William S. Lind, and Charles A. Coulombe, defend the monarchical political structure as the most natural and beneficial social arrangement.",
"title": "Ideological variants"
},
{
"paragraph_id": 32,
"text": "Cultural conservatives support the preservation of a cultural heritage, either of a nation or a shared culture that is not defined by national boundaries. The shared culture may be as divergent as Western culture or Chinese culture. In the United States, the term \"cultural conservative\" may imply a conservative position in the culture war. Cultural conservatives hold fast to traditional ways of thinking even in the face of monumental change, strongly believing in traditional values and traditional politics.",
"title": "Ideological variants"
},
{
"paragraph_id": 33,
"text": "Social conservatives believe that society is built upon a fragile network of relationships which need to be upheld through duty, traditional values, and established institutions; and that the government has a role in encouraging or enforcing traditional values or practices. A social conservative wants to preserve traditional morality and social mores, often by opposing what they consider radical policies or social engineering. Some social-conservative stances are the following:",
"title": "Ideological variants"
},
{
"paragraph_id": 34,
"text": "Religious conservatism principally applies the teachings of particular religions to politics—sometimes by merely proclaiming the value of those teachings, at other times by having those teachings influence laws.",
"title": "Ideological variants"
},
{
"paragraph_id": 35,
"text": "In most democracies, political conservatism seeks to uphold traditional family structures and social values. Religious conservatives typically oppose abortion, LGBT behaviour (or, in certain cases, identity), drug use, and sexual activity outside of marriage. In some cases, conservative values are grounded in religious beliefs, and conservatives seek to increase the role of religion in public life.",
"title": "Ideological variants"
},
{
"paragraph_id": 36,
"text": "Paternalistic conservatism is a strand in conservatism which reflects the belief that societies exist and develop organically and that members within them have obligations towards each other. There is particular emphasis on the paternalistic obligation (noblesse oblige) of those who are privileged and wealthy to the poorer parts of society, which is consistent with principles such as duty, organicism, and hierarchy. Paternal conservatives support neither the individual nor the state in principle, but are instead prepared to support either or recommend a balance between the two depending on what is most practical. Paternalistic conservatives historically favour a more aristocratic view and are ideologically related to High Tories.",
"title": "Ideological variants"
},
{
"paragraph_id": 37,
"text": "In more contemporary times, its proponents stress the importance of a social safety net to deal with poverty, supporting limited redistribution of wealth along with government regulation of markets in the interests of both consumers and producers. Paternalistic conservatism first arose as a distinct ideology in the United Kingdom under Prime Minister Benjamin Disraeli's \"One Nation\" Toryism. There have been a variety of one-nation conservative governments in the United Kingdom with exponents such as Prime Ministers Disraeli, Stanley Baldwin, Neville Chamberlain, Winston Churchill, and Harold Macmillan.",
"title": "Ideological variants"
},
{
"paragraph_id": 38,
"text": "In 19th-century Germany, Chancellor Otto von Bismarck adopted a set of social programs, known as state socialism, which included insurance for workers against sickness, accident, incapacity, and old age. The goal of this conservative state-building strategy was to make ordinary Germans, not just the Junker aristocracy, more loyal to state and Emperor. Chancellor Leo von Caprivi promoted a conservative agenda called the \"New Course\".",
"title": "Ideological variants"
},
{
"paragraph_id": 39,
"text": "In the United States, Theodore Roosevelt has been identified as the main exponent of progressive conservatism. Roosevelt stated that he had \"always believed that wise progressivism and wise conservatism go hand in hand\". The Republican administration of President William Howard Taft was progressive conservative, and he described himself as a believer in progressive conservatism. President Dwight D. Eisenhower also declared himself an advocate of progressive conservatism.",
"title": "Ideological variants"
},
{
"paragraph_id": 40,
"text": "In Canada, a variety of conservative governments have been part of the Red Tory tradition, with Canada's former major conservative party being named the Progressive Conservative Party of Canada from 1942 to 2003. Prime Ministers Arthur Meighen, R. B. Bennett, John Diefenbaker, Joe Clark, Brian Mulroney, and Kim Campbell led Red tory federal governments.",
"title": "Ideological variants"
},
{
"paragraph_id": 41,
"text": "Authoritarian conservatism refers to autocratic regimes that center their ideology around national conservatism, rather than ethnic nationalism, though certain racial components such as antisemitism may exist. Authoritarian conservative movements show strong devotion towards religion, tradition, and culture while also expressing fervent nationalism akin to other far-right nationalist movements. Examples of authoritarian conservative statesmen include Miklós Horthy in Hungary, Ioannis Metaxas in Greece, António de Oliveira Salazar in Portugal, Engelbert Dollfuss in Austria, and Francisco Franco in Spain.",
"title": "Ideological variants"
},
{
"paragraph_id": 42,
"text": "Authoritarian conservative movements were prominent in the same era as fascism, with which it sometimes clashed. Although both ideologies shared core values such as nationalism and had common enemies such as communism and materialism, there was nonetheless a contrast between the traditionalist nature of authoritarian conservatism and the revolutionary, palingenetic, and populist nature of fascism—thus it was common for authoritarian conservative regimes to suppress rising fascist and Nazi movements. The hostility between the two ideologies is highlighted by the struggle for power in Austria, which was marked by the assassination of the ultra-Catholic statesman Engelbert Dollfuss by Austrian Nazis. Likewise, Croatian fascists assassinated King Alexander I of Yugoslavia. In Romania, as the fascist Iron Guard was gaining popularity and Nazi Germany was making advances in Europe with the Anschluss and the Munich Agreement, King Carol II ordered the execution of Corneliu Zelea Codreanu and other top-ranking Romanian fascists. The exiled German Emperor Wilhelm II was an enemy of Adolf Hitler, stating that Nazism made him ashamed to be a German for the first time in his life and referring to the Nazis as ”a bunch of shirted gangsters\" and \"a mob … led by a thousand liars or fanatics”.",
"title": "Ideological variants"
},
{
"paragraph_id": 43,
"text": "The political scientist Seymour Martin Lipset has examined the class basis of right-wing extremist politics in the 1920–1960 era. He reports:",
"title": "Ideological variants"
},
{
"paragraph_id": 44,
"text": "Conservative or rightist extremist movements have arisen at different periods in modern history, ranging from the Horthyites in Hungary, the Christian Social Party of Dollfuss in Austria, Der Stahlhelm and other nationalists in pre-Hitler Germany, and Salazar in Portugal, to the pre-1966 Gaullist movements and the monarchists in contemporary France and Italy. The right extremists are conservative, not revolutionary. They seek to change political institutions in order to preserve or restore cultural and economic ones, while extremists of the centre [fascists/nazis] and left [communists/anarchists] seek to use political means for cultural and social revolution. The ideal of the right extremist is not a totalitarian ruler, but a monarch, or a traditionalist who acts like one. Many such movements in Spain, Austria, Hungary, Germany, and Italy have been explicitly monarchist […] The supporters of these movements differ from those of the centrists, tending to be wealthier, and more religious, which is more important in terms of a potential for mass support.",
"title": "Ideological variants"
},
{
"paragraph_id": 45,
"text": "Edmund Fawcett claims that fascism is totalitarian, populist, and anti-pluralist, whereas authoritarian conservatism is somewhat pluralist but most of all elitist and anti-populist. He concludes: \"The fascist is a nonconservative who takes anti-liberalism to extremes. The right-wing authoritarian is a conservative who takes fear of democracy to extremes.\"",
"title": "Ideological variants"
},
{
"paragraph_id": 46,
"text": "During the Cold War, right-wing military dictatorships were prominent in Latin America, with most nations being under military rule by the middle of the 1970s. One example of this was Augusto Pinochet, who ruled over Chile from 1973 to 1990. In the 21st century, the authoritarian style of government experienced a worldwide renaissance with conservative statesmen such as Vladimir Putin in Russia, Recep Tayyip Erdoğan in Turkey, Viktor Orbán in Hungary, Narendra Modi in India, and Donald Trump in the United States.",
"title": "Ideological variants"
},
{
"paragraph_id": 47,
"text": "Conservative parties vary widely from country to country in the goals they wish to achieve. Both conservative and classical liberal parties tend to favour private ownership of property, in opposition to communist, socialist, and green parties, which favour communal ownership or laws regulating responsibility on the part of property owners. Where conservatives and liberals differ is primarily on social issues, where conservatives tend to reject behaviour that does not conform to some social norm. Modern conservative parties often define themselves by their opposition to liberal or socialist parties.",
"title": "National variants"
},
{
"paragraph_id": 48,
"text": "The United States usage of the term conservative is unique to that country.",
"title": "National variants"
},
{
"paragraph_id": 49,
"text": "In India, the Bharatiya Janata Party (BJP), led by Narendra Modi, represent conservative politics. With over 170 million members as of October 2022, the BJP is by far the world's largest political party. It promotes Hindu nationalism, quasi-fascist Hindutva, a hostile foreign policy against Pakistan, and a conservative social and fiscal policy.",
"title": "National variants"
},
{
"paragraph_id": 50,
"text": "Singapore's only conservative party is the People's Action Party (PAP). It is currently in government and has been since independence in 1965. It promotes conservative values in the form of Asian democracy and Asian values.",
"title": "National variants"
},
{
"paragraph_id": 51,
"text": "South Korea's major conservative party, the People Power Party, has changed its form throughout its history. First it was the Democratic-Liberal Party and its first head was Roh Tae-woo, who was the first President of the Sixth Republic of South Korea. Democratic-Liberal Party was founded by the merging of Roh Tae-woo's Democratic Justice Party, Kim Young Sam's Reunification Democratic Party and Kim Jong-pil's New Democratic Republican Party. Kim Young-sam became the fourteenth President of Korea.",
"title": "National variants"
},
{
"paragraph_id": 52,
"text": "When the conservative party was beaten by the opposition party in the general election, it changed its form again to follow the party members' demand for reforms. It became the New Korea Party, but it changed again one year later since the President Kim Young-sam was blamed by the citizen for the International Monetary Fund. It changed its name to Grand National Party (GNP). Since the late Kim Dae-jung assumed the presidency in 1998, GNP had been the opposition party until Lee Myung-bak won the presidential election of 2007.",
"title": "National variants"
},
{
"paragraph_id": 53,
"text": "European conservatism has taken many different expressions. In Italy, which was united by liberals and radicals in the Risorgimento, liberals, not conservatives, emerged as the party of the right. During the first half of the 20th century, when socialism was gaining power around the world, conservatism in countries such as Austria, Germany, Greece, Hungary, Portugal, and Spain transformed into the far-right, becoming more authoritarian and extreme.",
"title": "National variants"
},
{
"paragraph_id": 54,
"text": "Austrian conservatism originated with Prince Klemens von Metternich (1773–1859), who was the architect behind the monarchist and imperialist Conservative Order that was enacted at the Congress of Vienna in the aftermath of the French Revolution and the Napoleonic Wars. The goal was to establish a European balance of power that could guarantee peace and suppress republican and nationalist movements. During its existence, the Austrian Empire was the third most populous monarchy in Europe after the Russian Empire and the United Kingdom. Following its defeat in the Austro-Prussian War, it transformed into the Austro-Hungarian Empire, which was the most diverse state in Europe with twelve nationalities living under a unifying monarch. The Empire was fragmented in the aftermath of World War I, ushering in the democratic First Austrian Republic.",
"title": "National variants"
},
{
"paragraph_id": 55,
"text": "The Austrian Civil War in 1934 saw a series of skirmishes between the right-wing government and socialist forces. When the insurgents were defeated, the government declared martial law and held mass trials, forcing leading socialist politicians, such as Otto Bauer, into exile. The conservatives banned the Social Democratic Party and its affiliated trade unions, and replaced parliamentary democracy with a corporatist and clerical constitution. The Patriotic Front, into which the paramilitary Heimwehr and the Christian Social Party were merged, became the only legal political party in the resulting authoritarian regime, the Federal State of Austria.",
"title": "National variants"
},
{
"paragraph_id": 56,
"text": "While having close ties to Fascist Italy, which was still a monarchy as well as a fellow Catholic nation, rightist Austria harboured strong anti-Prussian and anti-Nazi sentiments. Austria’s most influential conservative philosopher, the Catholic aristocrat Erik von Kuehnelt-Leddihn, published many books in which he interpreted Nazism as a leftist, ochlocratic, and demagogic ideology, opposed to the traditional rightist ideals of aristocracy, monarchy, and Christianity. Austria's dictator Engelbert Dollfuss saw Nazism as another form of totalitarian communism, and he saw Adolf Hitler as the German version of Joseph Stalin. The conservatives banned the Austrian Nazi Party and arrested many of its activists. In 1934, Dollfus was assassinated by Nazi enemies who sought revenge. In response, Benito Mussolini mobilised a part of the Italian army on the Austrian border and threatened Hitler with war in the event of a German invasion of Austria. In 1938, when Nazi Germany annexed Austria in the Anschluss, conservative groups were suppressed: members of the Austrian nobility and the Catholic clergy were arrested and their properties were confiscated.",
"title": "National variants"
},
{
"paragraph_id": 57,
"text": "Following World War II and the return to democracy, Austrian conservatives abandoned the authoritarianism of its past, believing in the principles of class collaboration and political compromise, while Austrian socialists also abandoned their extremism and distanced themselves from the totalitarianism of the Soviet Union. The conservatives formed the Austrian People's Party, which has been the major conservative party in Austria ever since. In contemporary politics, the party was led by Sebastian Kurz, whom the Frankfurter Allgemeine Zeitung nicknamed the \"young Metternich\".",
"title": "National variants"
},
{
"paragraph_id": 58,
"text": "Having its roots in the conservative Catholic Party, the Christian People's Party retained a conservative edge through the 20th century, supporting the king in the Royal Question, supporting nuclear family as the cornerstone of society, defending Christian education, and opposing euthanasia. The Christian People's Party dominated politics in post-war Belgium. In 1999, the party's support collapsed, and it became the country's fifth-largest party. Since 2014, the Flemish nationalist and conservative New Flemish Alliance is the largest party in Belgium.",
"title": "National variants"
},
{
"paragraph_id": 59,
"text": "Danish conservatism emerged with the political grouping Højre (literally \"Right\"), which due to its alliance with king Christian IX of Denmark dominated Danish politics and formed all governments from 1865 to 1901. When a constitutional reform in 1915 stripped the landed gentry of political power, Højre was succeeded by the Conservative People's Party of Denmark, which has since then been the main Danish conservative party. Another Danish conservative party was the Free Conservatives, who were active between 1902 and 1920. Traditionally and historically, conservatism in Denmark has been more populist and agrarian than in Sweden and Norway, where conservatism has been more elitist and urban.",
"title": "National variants"
},
{
"paragraph_id": 60,
"text": "The Conservative People's Party led the government coalition from 1982 to 1993. The party had previously been member of various governments from 1916 to 1917, 1940 to 1945, 1950 to 1953, and 1968 to 1971. The party was a junior partner in governments led by the Liberals from 2001 to 2011 and again from 2016 to 2019. The party is preceded by 11 years by the Young Conservatives (KU), today the youth movement of the party.",
"title": "National variants"
},
{
"paragraph_id": 61,
"text": "The Conservative People's Party had a stable electoral support close to 15 to 20% at almost all general elections from 1918 to 1971. In the 1970s it declined to around 5%, but then under the leadership of Poul Schlüter reached its highest popularity level ever in 1984, receiving 23% of the votes. Since the late 1990s the party has obtained around 5 to 10% of the vote. In 2022, the party received 5.5% of the vote.",
"title": "National variants"
},
{
"paragraph_id": 62,
"text": "Conservative thinking has also influenced other Danish political parties. In 1995, the Danish People's Party was founded, based on a mixture of conservative, nationalist, and social-democratic ideas. In 2015, the party New Right was established, professing a national-conservative attitude.",
"title": "National variants"
},
{
"paragraph_id": 63,
"text": "The conservative parties in Denmark have always considered the monarchy as a central institution in Denmark.",
"title": "National variants"
},
{
"paragraph_id": 64,
"text": "The conservative party in Finland is the National Coalition Party. The party was founded in 1918, when several monarchist parties united. Although right-wing in the past, today it is a moderate liberal-conservative party. While advocating economic liberalism, it is committed to the social market economy.",
"title": "National variants"
},
{
"paragraph_id": 65,
"text": "Early conservatism in France focused on the rejection of the secularism of the French Revolution, support for the role of the Catholic Church, and the restoration of the monarchy. After the first fall of Napoleon in 1814, the House of Bourbon returned to power in the Bourbon Restoration. Louis XVIII and Charles X, brothers of the executed King Louis XVI, successively mounted the throne and instituted a conservative government intended to restore the proprieties, if not all the institutions, of the Ancien Régime.",
"title": "National variants"
},
{
"paragraph_id": 66,
"text": "After the July Revolution of 1830, Louis Philippe I, a member of the more liberal Orléans branch of the House of Bourbon, proclaimed himself as King of the French. The Second French Empire saw an Imperial Bonapartist regime of Napoleon III from 1852 to 1870. The Bourbon monarchist cause was on the verge of victory in the 1870s, but then collapsed because the proposed king, Henri, Count of Chambord, refused to fly the tri-colored flag. The turn of the century saw the rise of Action Française – an ultraconservative, reactionary, nationalist, and royalist movement that advocated the restoration of the monarchy.",
"title": "National variants"
},
{
"paragraph_id": 67,
"text": "Religious tensions between Christian rightists and secular leftists heightened in the 1890–1910 era, but moderated after the spirit of unity in fighting the First World War. An authoritarian form of conservatism characterized the Vichy regime of 1940–1944 with heightened antisemitism, opposition to individualism, emphasis on family life, and national direction of the economy.",
"title": "National variants"
},
{
"paragraph_id": 68,
"text": "Conservatism has been the major political force in France since the Second World War, although the number of conservative groups and their lack of stability defy simple categorization. Following the war, conservatives supported Gaullist groups and parties, espoused nationalism, and emphasized tradition, order, and the regeneration of France. Unusually, post-war conservatism in France was formed around the personality of a leader—army officer Charles de Gaulle who led the Free French Forces against Nazi Germany—and it did not draw on traditional French conservatism, but on the Bonapartist tradition. Gaullism in France continues under The Republicans (formerly Union for a Popular Movement); it was previously led by Nicolas Sarkozy, who served as President of France from 2007 to 2012 and whose ideology is known as Sarkozysm.",
"title": "National variants"
},
{
"paragraph_id": 69,
"text": "In 2021, the French intellectual Éric Zemmour founded the nationalist party Reconquête, which has been described as a more elitist and conservative version of Marine Le Pen’s National Rally.",
"title": "National variants"
},
{
"paragraph_id": 70,
"text": "Conservatism developed alongside nationalism in Germany, culminating in Germany's victory over France in the Franco-Prussian War, the creation of the unified German Empire in 1871 and the simultaneous rise of Otto von Bismarck on the European political stage. Bismarck's \"balance of power\" model maintained peace in Europe for decades at the end of the 19th century. His \"revolutionary conservatism\" was a conservative state-building strategy, based on class collaboration and designed to make ordinary Germans—not just the Junker aristocracy—more loyal to state and Emperor. He created the modern welfare state in Germany in the 1880s. According to scholars, his strategy was:",
"title": "National variants"
},
{
"paragraph_id": 71,
"text": "granting social rights to enhance the integration of a hierarchical society, to forge a bond between workers and the state so as to strengthen the latter, to maintain traditional relations of authority between social and status groups, and to provide a countervailing power against the modernist forces of liberalism and socialism.",
"title": "National variants"
},
{
"paragraph_id": 72,
"text": "Bismarck also enacted universal male suffrage in the new German Empire in 1871. He became a great hero to German conservatives, who erected many monuments to his memory after he left office in 1890.",
"title": "National variants"
},
{
"paragraph_id": 73,
"text": "With the rise of Nazism in 1933, traditional agrarian movements faded and were supplanted by a more command-based economy and forced social integration. Adolf Hitler succeeded in garnering the support of many German industrialists; but prominent traditionalists, including military officers Claus von Stauffenberg and Henning von Tresckow, pastor Dietrich Bonhoeffer, Bishop Clemens August Graf von Galen, and monarchist Carl Friedrich Goerdeler, openly and secretly opposed his policies of euthanasia, genocide, and attacks on organized religion.",
"title": "National variants"
},
{
"paragraph_id": 74,
"text": "More recently, the work of conservative Christian Democratic Union leader and Chancellor Helmut Kohl helped bring about German reunification, along with the closer European integration in the form of the Maastricht Treaty. Today, German conservatism is often associated with politicians such as Chancellor Angela Merkel, whose tenure has been marked by attempts to save the common European currency (Euro) from demise. The German conservatives are divided under Merkel due to the refugee crisis in Germany and many conservatives in the CDU/CSU oppose the refugee and migrant policies developed under Merkel. The 2020s also saw the rise of right-wing populist Alternative for Germany.",
"title": "National variants"
},
{
"paragraph_id": 75,
"text": "The main inter-war conservative party was called the People's Party (PP), which supported constitutional monarchy and opposed the republican Liberal Party. Both parties were suppressed by the authoritarian, arch-conservative, and royalist 4th of August Regime of Ioannis Metaxas in 1936–1941. The PP was able to re-group after the Second World War as part of a United Nationalist Front which achieved power campaigning on a simple anti-communist, nationalist platform during the Greek Civil War (1946–1949). However, the vote received by the PP declined during the so-called \"Centrist Interlude\" in 1950–1952.",
"title": "National variants"
},
{
"paragraph_id": 76,
"text": "In 1952, Marshal Alexandros Papagos created the Greek Rally as an umbrella for the right-wing forces. The Greek Rally came to power in 1952 and remained the leading party in Greece until 1963. After Papagos' death in 1955, it was reformed as the National Radical Union under Konstantinos Karamanlis. Right-wing governments backed by the palace and the army overthrew the Centre Union government in 1965 and governed the country until the establishment of the far-right Greek junta (1967–1974). After the regime's collapse in August 1974, Karamanlis returned from exile to lead the government and founded the New Democracy party. The new conservative party had four objectives: to confront Turkish expansionism in Cyprus, to reestablish and solidify democratic rule, to give the country a strong government, and to make a powerful moderate party a force in Greek politics.",
"title": "National variants"
},
{
"paragraph_id": 77,
"text": "The Independent Greeks, a newly formed political party in Greece, has also supported conservatism, particularly national and religious conservatism. The Founding Declaration of the Independent Greeks strongly emphasises in the preservation of the Greek state and its sovereignty, the Greek people, and the Greek Orthodox Church.",
"title": "National variants"
},
{
"paragraph_id": 78,
"text": "Founded in 1924 as the Conservative Party, Iceland's Independence Party adopted its current name in 1929 after the merger with the Liberal Party. From the beginning, they have been the largest vote-winning party, averaging around 40%. They combined liberalism and conservatism, supported nationalization of infrastructure, and advocated class collaboration. While mostly in opposition during the 1930s, they embraced economic liberalism, but accepted the welfare state after the war and participated in governments supportive of state intervention and protectionism. Unlike other Scandanivian conservative (and liberal) parties, it has always had a large working-class following. After the financial crisis in 2008, the party has sunk to a lower support level at around 20–25%.",
"title": "National variants"
},
{
"paragraph_id": 79,
"text": "After unification, Italy was governed successively by the Historical Right, which represented conservative, liberal-conservative, and conservative-liberal positions, and the Historical Left.",
"title": "National variants"
},
{
"paragraph_id": 80,
"text": "After World War I, the country saw the emergence of its first mass parties, notably including the Italian People's Party (PPI), a Christian-democratic party that sought to represent the Catholic majority, which had long refrained from politics. The PPI and the Italian Socialist Party decisively contributed to the loss of strength and authority of the old liberal ruling class, which had not been able to structure itself into a proper party: the Liberal Union was not coherent and the Italian Liberal Party came too late. In 1921 Benito Mussolini gave birth to the National Fascist Party (PNF), and the next year, through the March on Rome, he was appointed Prime Minister. In 1926 all parties were dissolved except the PNF, which thus remained the only legal party in the Kingdom of Italy until the fall of the regime in July 1943. By 1945, fascists were discredited, disbanded, and outlawed, while Mussolini was executed in April that year.",
"title": "National variants"
},
{
"paragraph_id": 81,
"text": "After World War II, the centre-right was dominated by the centrist party Christian Democracy (DC), which included both conservative and centre-left elements. With its landslide victory over the Italian Socialist Party and the Italian Communist Party in 1948, the political centre was in power. In Denis Mack Smith's words, it was \"moderately conservative, reasonably tolerant of everything which did not touch religion or property, but above all Catholic and sometimes clerical\". It dominated politics until DC's dissolution in 1994. Among DC's frequent allies, there was the conservative-liberal Italian Liberal Party. At the right of the DC stood parties like the royalist Monarchist National Party and the post-fascist Italian Social Movement.",
"title": "National variants"
},
{
"paragraph_id": 82,
"text": "In 1994, entrepreneur and media tycoon Silvio Berlusconi founded the liberal-conservative party Forza Italia (FI). He won three elections in 1994, 2001, and 2008, governing the country for almost ten years as Prime Minister. FI formed a coalitions with several parties, including the national-conservative National Alliance (AN), heir of the MSI, and the regionalist Lega Nord (LN). FI was briefly incorporated, along with AN, in The People of Freedom party and later revived in the new Forza Italia. After the 2018 general election, the LN and the Five Star Movement formed a populist government, which lasted about a year. In the 2022 general election, a centre-right coalition came to power, this time dominated by Brothers of Italy (FdI), a new national-conservative party born on the ashes of AN. Consequently, FdI, the re-branded Lega, and FI formed a government under FdI leader Giorgia Meloni.",
"title": "National variants"
},
{
"paragraph_id": 83,
"text": "Luxembourg's major conservative party, the Christian Social People's Party, was formed as the Party of the Right in 1914 and adopted its present name in 1945. It was consistently the largest political party in Luxembourg, and dominated politics throughout the 20th century.",
"title": "National variants"
},
{
"paragraph_id": 84,
"text": "Liberalism has been strong in the Netherlands. Thus, rightist parties are often liberal-conservative or conservative-liberal. One example of this is the People's Party for Freedom and Democracy. Even the right-wing populist or far-right Party for Freedom, which dominated the 2023 election, supports liberal positions such as women's and gay rights, abortion, and euthanasia.",
"title": "National variants"
},
{
"paragraph_id": 85,
"text": "The Conservative Party of Norway (Norwegian: Høyre, literally \"Right\") was formed by the old upper-class of state officials and wealthy merchants to fight the populist democracy of the Liberal Party, but it lost power in 1884, when parliamentarian government was first practiced. It formed its first government under parliamentarism in 1889 and continued to alternate in power with the Liberals until the 1930s, when Labour became the dominant party. It has elements both of paternalism, stressing the responsibilities of the state, and of economic liberalism. It first returned to power in the 1960s. During Kåre Willoch's premiership in the 1980s, much emphasis was laid on liberalizing the credit and housing market and abolishing the NRK TV and radio monopoly, while supporting law and order in criminal justice and traditional norms in education.",
"title": "National variants"
},
{
"paragraph_id": 86,
"text": "Under Vladimir Putin, the dominant leader since 1999, Russia has promoted explicitly conservative policies in social, cultural, and political matters, both at home and abroad. Putin has criticized globalism and economic liberalism, claiming that \"liberalism has become obsolete\" and that the vast majority of people in the world oppose multiculturalism, free immigration, and rights for LGBT people. Russian conservatism is special in some respects as it supports a mixed economy with economic intervention, combined with a strong nationalist sentiment and social conservatism which is largely populist. As a result, Russian conservatism opposes right-libertarian ideals such as the aforementioned concept of economic liberalism found in other conservative movements around the world.",
"title": "National variants"
},
{
"paragraph_id": 87,
"text": "Putin has also promoted new think tanks that bring together like-minded intellectuals and writers. For example, the Izborsky Club, founded in 2012 by Alexander Prokhanov, stresses Russian nationalism, the restoration of Russia's historical greatness, and systematic opposition to liberal ideas and policies. Vladislav Surkov, a senior government official, has been one of the key ideologues during Putin's presidency.",
"title": "National variants"
},
{
"paragraph_id": 88,
"text": "In cultural and social affairs, Putin has collaborated closely with the Russian Orthodox Church. Under Patriarch Kirill of Moscow, the Church has backed the expansion of Russian power into Crimea and eastern Ukraine. More broadly, The New York Times reports in September 2016 how the Church's policy prescriptions support the Kremlin's appeal to social conservatives:",
"title": "National variants"
},
{
"paragraph_id": 89,
"text": "\"A fervent foe of homosexuality and any attempt to put individual rights above those of family, community, or nation, the Russian Orthodox Church helps project Russia as the natural ally of all those who pine for a more secure, illiberal world free from the tradition-crushing rush of globalization, multiculturalism, and women's and gay rights.\"",
"title": "National variants"
},
{
"paragraph_id": 90,
"text": "In the early 19th century, Swedish conservatism developed alongside Swedish Romanticism. The historian Erik Gustaf Geijer, an exponent of Gothicism, glorified the Viking Age and the Swedish Empire, and the idealist philosopher Christopher Jacob Boström became the chief ideologue of the official state doctrine, which dominated Swedish politics for almost a century. Other influential Swedish conservative Romantics were Esaias Tegnér and Per Daniel Amadeus Atterbom.",
"title": "National variants"
},
{
"paragraph_id": 91,
"text": "Early parliamentary conservatism in Sweden was explicitly elitist. Indeed, the Conservative Party was formed in 1904 with one major goal in mind: to stop the advent of universal suffrage, which they feared would result in socialism. Yet, it was a Swedish admiral, the conservative politican Arvid Lindman, who first extended democracy by enacting male suffrage, despite the protests of more traditionalist voices, such as the later Prime Minister, the arch-conservative and authoritarian Ernst Trygger, who railed at progressive policies such as the abolition of the death penalty.",
"title": "National variants"
},
{
"paragraph_id": 92,
"text": "Once a democratic system was in place, Swedish conservatives sought to combine traditional elitism with modern populism. Sweden’s most renowned political scientist, the conservative politician Rudolf Kjellén, coined the terms geopolitics and biopolitics in relation to his organic theory of the state. He also developed the corporatist-nationalist concept of Folkhemmet (’the home of the people’), which became the single most powerful political concept in Sweden throughout the 20th century, although it was adopted by the Social Democratic Party who gave it a more socialist interpretation.",
"title": "National variants"
},
{
"paragraph_id": 93,
"text": "After a brief grand coalition between Left and Right during WWII, the centre-right parties struggled to cooperate due to their ideological differences: the agrarian populism of the Centre Party, the urban liberalism of the Liberal People’s Party, and the liberal-conservative elitism of the Moderate Party (the old Conservative Party). However, in 1976 and in 1979, the three parties managed to form a government under Thorbjörn Fälldin—and again in 1991, this time under the aristocrat Carl Bildt and with support from the newly founded Christian Democrats, the most conservative party in contemporary Sweden.",
"title": "National variants"
},
{
"paragraph_id": 94,
"text": "In modern times, mass immigration from distant cultures caused a large populist dissatisfaction, which was not channeled through any of the established parties, who generally espoused multiculturalism. Instead, the 2010s saw the rise of the right-wing populist Sweden Democrats, who were surging as the largest party in the polls on several occasions. The party was ostracized by the other parties until 2019, when Christian Democrat leader Ebba Busch reached out for collaboration, after which the Moderate Party followed suit. In 2022, the centre-right parties formed a government with support from the Sweden Democrats as the biggest party. The subsequent Tidö Agreement, negotiated in Tidö Castle, incorporated authoritarian policies such as a stricter stance on immigration and a harsher stance on law and order.",
"title": "National variants"
},
{
"paragraph_id": 95,
"text": "In some aspects, Swiss conservatism is unique. While most European nations have had a long monarchical tradition, been relatively homogenous ethnically, and engaged in many wars, Switzerland is an old republic with a multicultural mosaic of three major nationalities, adhering to the principle of Swiss neutrality.",
"title": "National variants"
},
{
"paragraph_id": 96,
"text": "There are a number of conservative parties in Switzerland's parliament, the Federal Assembly. These include the largest ones: the Swiss People's Party (SVP), the Christian Democratic People's Party (CVP), and the Conservative Democratic Party of Switzerland (BDP), which is a splinter of the SVP created in the aftermath to the election of Eveline Widmer-Schlumpf as Federal Council.",
"title": "National variants"
},
{
"paragraph_id": 97,
"text": "The SVP was formed from the 1971 merger of the Party of Farmers, Traders and Citizens, formed in 1917, and the smaller Democratic Party, formed in 1942. The SVP emphasized agricultural policy and was strong among farmers in German-speaking Protestant areas. As Switzerland considered closer relations with the European Union in the 1990s, the SVP adopted a more militant protectionist and isolationist stance. This stance has allowed it to expand into German-speaking Catholic mountainous areas. The Anti-Defamation League, a non-Swiss lobby group based in the United States has accused them of manipulating issues such as immigration, Swiss neutrality, and welfare benefits, awakening antisemitism and racism. The Council of Europe has called the SVP \"extreme right\", although some scholars dispute this classification. For instance, Hans-Georg Betz describes it as \"populist radical right\". The SVP has been the largest party since 2003.",
"title": "National variants"
},
{
"paragraph_id": 98,
"text": "The authoritarian Ukrainian State was headed by the Cossack aristocrat Pavlo Skoropadskyi and represented the conservative movement. The 1918 Hetman government, which appealed to the tradition of the 17th–18th century Cossack Hetman state, represented the conservative strand in Ukraine's struggle for independence. It had the support of the proprietary classes and of conservative and moderate political groups. Vyacheslav Lypynsky was a main ideologue of Ukrainian conservatism.",
"title": "National variants"
},
{
"paragraph_id": 99,
"text": "Modern English conservatives celebrate Anglo-Irish statesman Edmund Burke as their intellectual father. Burke was affiliated with the Whig Party, which eventually split amongst the Liberal Party and the Conservative Party, but the modern Conservative Party is generally thought to derive primarily from the Tories, and the MPs of the modern conservative party are still frequently referred to as Tories.",
"title": "National variants"
},
{
"paragraph_id": 100,
"text": "Shortly after Burke's death in 1797, conservatism was revived as a mainstream political force as the Whigs suffered a series of internal divisions. This new generation of conservatives derived their politics not from Burke, but from his predecessor, the Viscount Bolingbroke (1678–1751), who was a Jacobite and traditional Tory, lacking Burke's sympathies for Whiggish policies such as Catholic emancipation and American independence (famously attacked by Samuel Johnson in \"Taxation No Tyranny\").",
"title": "National variants"
},
{
"paragraph_id": 101,
"text": "In the first half of the 19th century, many newspapers, magazines, and journals promoted loyalist or right-wing attitudes in religion, politics, and international affairs. Burke was seldom mentioned, but William Pitt the Younger (1759–1806) became a conspicuous hero. The most prominent journals included The Quarterly Review, founded in 1809 as a counterweight to the Whigs' Edinburgh Review, and the even more conservative Blackwood's Magazine. The Quarterly Review promoted a balanced Canningite Toryism, as it was neutral on Catholic emancipation and only mildly critical of Nonconformist dissent; it opposed slavery and supported the current poor laws; and it was \"aggressively imperialist\". The high-church clergy of the Church of England read the Orthodox Churchman's Magazine, which was equally hostile to Jewish, Catholic, Jacobin, Methodist and Unitarian spokesmen. Anchoring the ultra-Tories, Blackwood's Edinburgh Magazine stood firmly against Catholic emancipation and favoured slavery, cheap money, mercantilism, the Navigation Acts, and the Holy Alliance.",
"title": "National variants"
},
{
"paragraph_id": 102,
"text": "Conservatism evolved after 1820, embracing free trade in 1846 and a commitment to democracy, especially under Benjamin Disraeli. The effect was to significantly strengthen conservatism as a grassroots political force. Conservatism no longer was the philosophical defense of the landed aristocracy, but had been refreshed into redefining its commitment to the ideals of order, both secular and religious, expanding imperialism, strengthened monarchy, and a more generous vision of the welfare state as opposed to the punitive vision of the Whigs and liberals. As early as 1835, Disraeli attacked the Whigs and utilitarians as slavishly devoted to an industrial oligarchy, while he described his fellow Tories as the only \"really democratic party of England\", devoted to the interests of the whole people. Nevertheless, inside the party there was a tension between the growing numbers of wealthy businessmen on the one side and the aristocracy and rural gentry on the other. The aristocracy gained strength as businessmen discovered they could use their wealth to buy a peerage and a country estate.",
"title": "National variants"
},
{
"paragraph_id": 103,
"text": "Some conservatives lamented the passing of a pastoral world where the ethos of noblesse oblige had promoted respect from the lower classes. They saw the Anglican Church and the aristocracy as balances against commercial wealth. They worked toward legislation for improved working conditions and urban housing. This viewpoint would later be called Tory democracy. However, since Burke, there has always been tension between traditional aristocratic conservatism and the wealthy liberal business class.",
"title": "National variants"
},
{
"paragraph_id": 104,
"text": "In 1834, Tory Prime Minister Robert Peel issued the \"Tamworth Manifesto\", in which he pledged to endorse moderate political reform. This marked the beginning of the transformation from High Tory reactionism towards a more modern form of conservatism. As a result, the party became known as the Conservative Party—a name it has retained to this day. However, Peel would also be the root of a split in the party between the traditional Tories (by the Earl of Derby and Benjamin Disraeli) and the \"Peelites\" (led first by Peel himself, then by the Earl of Aberdeen). The split occurred in 1846 over the issue of free trade, which Peel supported, versus protectionism, supported by Derby. The majority of the party sided with Derby whilst about a third split away, eventually merging with the Whigs and the radicals to form the Liberal Party. Despite the split, the mainstream Conservative Party accepted the doctrine of free trade in 1852.",
"title": "National variants"
},
{
"paragraph_id": 105,
"text": "In the second half of the 19th century, the Liberal Party faced political schisms, especially over Irish Home Rule. Leader William Gladstone (himself a former Peelite) sought to give Ireland a degree of autonomy, a move that elements in both the left and right-wings of his party opposed. These split off to become the Liberal Unionists (led by Joseph Chamberlain), forming a coalition with the Conservatives before merging with them in 1912. The Liberal Unionist influence dragged the Conservative Party towards the left as Conservative governments passed a number of progressive reforms at the turn of the 20th century. By the late 19th century, the traditional business supporters of the Liberal Party had joined the Conservatives, making them the party of business and commerce as well.",
"title": "National variants"
},
{
"paragraph_id": 106,
"text": "After a period of Liberal dominance before the First World War, the Conservatives gradually became more influential in government, regaining full control of the cabinet in 1922. In the inter-war period, conservatism was the major ideology in Britain as the Liberal Party vied with the Labour Party for control of the left. After the Second World War, the first Labour government (1945–1951) under Clement Attlee embarked on a program of nationalization of industry and the promotion of social welfare. The Conservatives generally accepted those policies until the 1980s.",
"title": "National variants"
},
{
"paragraph_id": 107,
"text": "In the 1980s, the Conservative government of Margaret Thatcher, guided by neoliberal economics, reversed many of Labour's social programmes, privatised large parts of the UK economy, and sold state-owned assets. The Conservative Party also adopted soft eurosceptic politics and opposed Federal Europe. Other conservative political parties, such as the Democratic Unionist Party (DUP, founded in 1971), and the United Kingdom Independence Party (UKIP, founded in 1993), began to appear, although they have yet to make any significant impact at Westminster. As of 2014, the DUP comprises the largest political party in the ruling coalition in the Northern Ireland Assembly), and from 2017–2019 the DUP provided support for the Conservative minority government under a confidence-and-supply arrangement.",
"title": "National variants"
},
{
"paragraph_id": 108,
"text": "Conservative elites have long dominated Latin American nations. Mostly, this has been achieved through control of civil institutions, the Catholic Church, and the military, rather than through party politics. Typically, the Church was exempt from taxes and its employees immune from civil prosecution. Where conservative parties were weak or non-existent, conservatives were more likely to rely on military dictatorship as a preferred form of government.",
"title": "National variants"
},
{
"paragraph_id": 109,
"text": "However, in some nations where the elites were able to mobilize popular support for conservative parties, longer periods of political stability were achieved. Chile, Colombia, and Venezuela are examples of nations that developed strong conservative parties. Argentina, Brazil, El Salvador, and Peru are examples of nations where this did not occur. The Conservative Party of Venezuela disappeared following the Federal Wars of 1858–1863. Chile's conservative party, the National Party, disbanded in 1973 following a military coup and did not re-emerge as a political force following the subsequent return to democracy.",
"title": "National variants"
},
{
"paragraph_id": 110,
"text": "Louis Hartz explained conservatism in Latin American nations as a result of their settlement as feudal societies.",
"title": "National variants"
},
{
"paragraph_id": 111,
"text": "Conservatism in Brazil originates from the cultural and historical tradition of Brazil, whose cultural roots are Luso-Iberian and Roman Catholic. More traditional conservative historical views and features include belief in political federalism and monarchism.",
"title": "National variants"
},
{
"paragraph_id": 112,
"text": "In cultural life, Brazilian conservatism from the 20th century on includes names such as Mário Ferreira dos Santos and Vicente Ferreira da Silva in philosophy; Gerardo Melo Mourão and Otto Maria Carpeaux in literature; Bruno Tolentino in poetry; Olavo de Carvalho, Paulo Francis and Luís Ernesto Lacombe in journalism; Manuel de Oliveira Lima and João Camilo de Oliveira Torres in historiography; Sobral Pinto and Miguel Reale in law; Gustavo Corção, Plinio Corrêa de Oliveira, Father Léo and Father Paulo Ricardo in religion; and Roberto Campos and Mario Henrique Simonsen in economics.",
"title": "National variants"
},
{
"paragraph_id": 113,
"text": "In contemporary politics, a conservative wave began roughly around the 2014 Brazilian presidential election. According to commentators, the National Congress of Brazil elected in 2014 may be considered the most conservative since the re-democratization movement, citing an increase in the number of parliamentarians linked to more conservative segments, such as ruralists, the military, the police, and religious conservatives. The subsequent economic crisis of 2015 and investigations of corruption scandals led to a right-wing movement that sought to rescue ideas from economic liberalism and conservatism in opposition to socialism. At the same time, fiscal conservatives such as those that make up the Free Brazil Movement emerged among many others. National-conservative candidate Jair Bolsonaro of the Social Liberal Party was the winner of the 2018 Brazilian presidential election.",
"title": "National variants"
},
{
"paragraph_id": 114,
"text": "The active conservative parties in Brazil are Brazil Union, Progressistas, Republicans, Liberal Party, Brazilian Labour Renewal Party, Patriota, Brazilian Labour Party, Social Christian Party and Brasil 35.",
"title": "National variants"
},
{
"paragraph_id": 115,
"text": "The Colombian Conservative Party, founded in 1849, traces its origins to opponents of General Francisco de Paula Santander's 1833–1837 administration. While the term \"liberal\" had been used to describe all political forces in Colombia, the conservatives began describing themselves as \"conservative liberals\" and their opponents as \"red liberals\". From the 1860s until the present, the party has supported strong central government and the Catholic Church, especially its role as protector of the sanctity of the family, and opposed separation of church and state. Its policies include the legal equality of all men, the citizen's right to own property, and opposition to dictatorship. It has usually been Colombia's second largest party, with the Colombian Liberal Party being the largest.",
"title": "National variants"
},
{
"paragraph_id": 116,
"text": "Canada's conservatives had their roots in the Tory loyalists who left America after the American Revolution. They developed in the socio-economic and political cleavages that existed during the first three decades of the 19th century and had the support of the mercantile, professional, and religious elites in Ontario and to a lesser extent in Quebec. Holding a monopoly over administrative and judicial offices, they were called the Family Compact in Ontario and the Chateau Clique in Quebec. John A. Macdonald's successful leadership of the movement to confederate the provinces and his subsequent tenure as prime minister for most of the late 19th century rested on his ability to bring together the English-speaking Protestant aristocracy and the ultramontane Catholic hierarchy of Quebec and to keep them united in a conservative coalition.",
"title": "National variants"
},
{
"paragraph_id": 117,
"text": "The conservatives combined pro-market liberalism and Toryism. They generally supported an activist government and state intervention in the marketplace, and their policies were marked by noblesse oblige, a paternalistic responsibility of the elites for the less well-off. The party was known as the Progressive Conservatives from 1942 until 2003, when the party merged with the Canadian Alliance to form the Conservative Party of Canada.",
"title": "National variants"
},
{
"paragraph_id": 118,
"text": "The conservative and autonomist Union Nationale, led by Maurice Duplessis, governed the province of Quebec in periods from 1936 to 1960 and in a close alliance with the Catholic Church, small rural elites, farmers, and business elites. This period, known by liberals as the Great Darkness, ended with the Quiet Revolution and the party went into terminal decline.",
"title": "National variants"
},
{
"paragraph_id": 119,
"text": "By the end of the 1960s, the political debate in Quebec centered around the question of independence, opposing the social democratic and sovereignist Parti Québécois and the centrist and federalist Quebec Liberal Party, therefore marginalizing the conservative movement. Most French Canadian conservatives rallied either the Quebec Liberal Party or the Parti Québécois, while some of them still tried to offer an autonomist third-way with what was left of the Union Nationale or the more populists Ralliement créditiste du Québec and Parti national populaire, but by the 1981 provincial election politically organized conservatism had been obliterated in Quebec. It slowly started to revive at the 1994 provincial election with the Action démocratique du Québec, who served as Official opposition in the National Assembly from 2007 to 2008, before merging in 2012 with François Legault's Coalition Avenir Québec, which took power in 2018. The modern Conservative Party of Canada has rebranded conservatism and, under the leadership of Stephen Harper, added more conservative policies. Yoram Hazony, a scholar on the history and ideology of conservatism, identified the Canadian psychologist Jordan Peterson as the most significant conservative thinker to appear in the English-speaking world in a generation.",
"title": "National variants"
},
{
"paragraph_id": 120,
"text": "The meaning of conservatism in the United States is different from the way the word is used elsewhere. As historian Leo P. Ribuffo notes, \"what Americans now call conservatism much of the world calls liberalism or neoliberalism\". However, the prominent American conservative writer Russell Kirk, in his influential work The Conservative Mind (1953), argued that conservatism had been brought to the United States and he interpreted the American Revolution as a \"conservative revolution\" against royalist innovation.",
"title": "National variants"
},
{
"paragraph_id": 121,
"text": "American conservatism is a broad system of political beliefs in the United States, which is characterized by respect for American traditions, support for Judeo-Christian values, economic liberalism, anti-communism, and a defense of Western culture. Liberty within the bounds of conformity to conservatism is a core value, with a particular emphasis on strengthening the free market, limiting the size and scope of government, and opposing high taxes as well as government or labor union encroachment on the entrepreneur.",
"title": "National variants"
},
{
"paragraph_id": 122,
"text": "The 1830s Democratic Party became divided between Southern Democrats, who supported slavery, secession, and later segregation, and the Northern Democrats, who tended to support the abolition of slavery, union, and equality. Many Democrats were conservative in the sense that they wanted things to be like they were in the past, especially as far as race was concerned. They generally favored poorer farmers and urban workers, and were hostile to banks, industrialization, and high tariffs.",
"title": "National variants"
},
{
"paragraph_id": 123,
"text": "The post-Civil War Republican Party elected the first People of Color to serve in both local and national political office. The Southern Democrats united with pro-segregation Northern Republicans to form the Conservative Coalition, which successfully put an end to Blacks being elected to national political office until 1967, when Edward Brooke was elected Senator from Massachusetts. Conservative Democrats influenced US politics until 1994's Republican Revolution, when the American South shifted from solid Democrat to solid Republican, while maintaining its conservative values.",
"title": "National variants"
},
{
"paragraph_id": 124,
"text": "In late 19th century, the Democratic Party split into two factions; the more conservative Eastern business faction (led by Grover Cleveland) favored gold, while the South and West (led by William Jennings Bryan) wanted more silver in order to raise prices for their crops. In 1892, Cleveland won the election on a conservative platform, which supported maintaining the gold standard, reducing tariffs, and taking a laissez-faire approach to government intervention. A severe nationwide depression ruined his plans. Many of his supporters in 1896 supported the Gold Democrats when liberal William Jennings Bryan won the nomination and campaigned for bimetalism, money backed by both gold and silver. The conservative wing nominated Alton B. Parker in 1904, but he got very few votes.",
"title": "National variants"
},
{
"paragraph_id": 125,
"text": "The major conservative party in the United States today is the Republican Party, also known as the GOP (Grand Old Party). Modern American conservatives often consider individual liberty as the fundamental trait of democracy, as long as it conforms to conservative values, small government, deregulation of the government, economic liberalism, and free trade—which contrasts with modern American liberals, who generally place a greater value on social equality and social justice. Other major priorities within American conservatism include support for the traditional family, law and order, the right to bear arms, Christian values, anti-communism, and a defense of \"Western civilization from the challenges of modernist culture and totalitarian governments\". Economic conservatives and libertarians favor small government, low taxes, limited regulation, and free enterprise. Some social conservatives see traditional social values threatened by secularism, so they support school prayer and oppose abortion and homosexuality. Neoconservatives want to expand American ideals throughout the world and show a strong support for Israel. Paleoconservatives oppose multiculturalism and press for restrictions on immigration. Most US conservatives prefer Republicans over Democrats, and most factions favor a strong foreign policy and a strong military.",
"title": "National variants"
},
{
"paragraph_id": 126,
"text": "The conservative movement of the 1950s attempted to bring together the divergent conservative strands, stressing the need for unity to prevent the spread of \"godless communism\", which Reagan later labeled an \"evil empire\". During the Reagan administration, conservatives also supported the so-called Reagan Doctrine, under which the US as part of a Cold War strategy provided military and other support to guerrilla insurgencies that were fighting governments identified as socialist or communist. The Reagan administration also adopted neoliberalism and Reaganomics (pejoratively referred to as trickle-down economics), resulting in the 1980s economic growth and trillion-dollar deficits. Other modern conservative positions include anti-environmentalism. On average, American conservatives desire tougher foreign policies than liberals do.",
"title": "National variants"
},
{
"paragraph_id": 127,
"text": "The Tea Party movement, founded in 2009, proved a large outlet for populist American conservative ideas. Their stated goals included rigorous adherence to the US constitution, lower taxes, and opposition to a growing role for the federal government in health care. Electorally, it was considered a key force in Republicans reclaiming control of the US House of Representatives in 2010.",
"title": "National variants"
},
{
"paragraph_id": 128,
"text": "The Liberal Party of Australia adheres to the principles of social conservatism and liberal conservatism. It is liberal in the sense of economics. Commentators explain: \"In America, 'liberal' means left-of-center, and it is a pejorative term when used by conservatives in adversarial political debate. In Australia, of course, the conservatives are in the Liberal Party.\" The National Right is the most organised and reactionary of the three faction within the party.",
"title": "National variants"
},
{
"paragraph_id": 129,
"text": "Other conservative parties are the National Party of Australia (a sister party of the Liberals), Family First Party, Democratic Labor Party, Shooters, Fishers and Farmers Party, Australian Conservatives, and the Katter's Australian Party.",
"title": "National variants"
},
{
"paragraph_id": 130,
"text": "The largest party in the country is the Australian Labor Party, and its dominant faction is Labor Right, a socially conservative element. Australia undertook significant economic reform under the Labor Party in the mid-1980s. Consequently, issues like protectionism, welfare reform, privatization, and deregulation are no longer debated in the political space as they are in Europe or North America.",
"title": "National variants"
},
{
"paragraph_id": 131,
"text": "Political scientist James Jupp writes that \"[the] decline in English influences on Australian reformism and radicalism, and appropriation of the symbols of Empire by conservatives continued under the Liberal Party leadership of Sir Robert Menzies, which lasted until 1966\".",
"title": "National variants"
},
{
"paragraph_id": 133,
"text": "Historic conservatism in New Zealand traces its roots to the unorganised conservative opposition to the New Zealand Liberal Party in the late 19th century. In 1909 this ideological strand found a more organised expression in the Reform Party, a forerunner to the contemporary New Zealand National Party, which absorbed historic conservative elements. The National Party, established in 1936, embodies a spectrum of tendencies, including conservative and liberal. Throughout its history, the party has oscillated between periods of conservative emphasis and liberal reform. Its stated values include \"individual freedom and choice\" and \"limited government\".",
"title": "National variants"
},
{
"paragraph_id": 134,
"text": "In the 1980s and 1990s both the National Party and its main opposing party, the traditionally left-wing Labour Party, implemented free-market reforms.",
"title": "National variants"
},
{
"paragraph_id": 135,
"text": "The New Zealand First party, which split from the National Party in 1993, espouses nationalist and conservative principles.",
"title": "National variants"
},
{
"paragraph_id": 136,
"text": "The Big Five Personality Model has applications in the study of political psychology. It has been found by several studies that individuals who score high in Conscientiousness (the quality of working hard and being careful) are more likely to possess a right-wing political identification. On the opposite end of the spectrum, a strong correlation was identified between high scores in Openness to Experience and a left-leaning ideology. Because conscientiousness is positively related to job performance, a 2021 study found that conservative service workers earn higher ratings, evaluations, and tips than social liberal ones.",
"title": "Psychology"
},
{
"paragraph_id": 137,
"text": "A number of studies have found that disgust is tightly linked to political orientation. People who are highly sensitive to disgusting images are more likely to align with the political right and value traditional ideals of bodily and spiritual purity, tending to oppose, for example, abortion and gay marriage.",
"title": "Psychology"
},
{
"paragraph_id": 138,
"text": "Research has also found that people who are more disgust sensitive tend to favour their own in-group over out-groups. The reason behind this may be that people begin to associate outsiders with disease while associating health with people similar to themselves.",
"title": "Psychology"
},
{
"paragraph_id": 139,
"text": "The higher one's disgust sensitivity is, the greater the tendency to make more conservative moral judgments. Disgust sensitivity is associated with moral hypervigilance, which means that people who have higher disgust sensitivity are more likely to think that suspects of a crime are guilty. They also tend to view them as evil, if found guilty, thus endorsing them to harsher punishment in the setting of a court.",
"title": "Psychology"
},
{
"paragraph_id": 140,
"text": "The right-wing authoritarian personality (RWA) is a personality type that describes somebody who is highly submissive to their authority figures, acts aggressively in the name of said authorities, and is conformist in thought and behaviour. According to psychologist Bob Altemeyer, individuals who are politically conservative tend to rank high in RWA. This finding was echoed by Theodor W. Adorno in The Authoritarian Personality (1950) based on the F-scale personality test.",
"title": "Psychology"
},
{
"paragraph_id": 141,
"text": "A study done on Israeli and Palestinian students in Israel found that RWA scores of right-wing party supporters were significantly higher than those of left-wing party supporters. However, a 2005 study by H. Michael Crowson and colleagues suggested a moderate gap between RWA and other conservative positions, stating that their \"results indicated that conservatism is not synonymous with RWA\".",
"title": "Psychology"
},
{
"paragraph_id": 142,
"text": "In 1973, British psychologist Glenn Wilson published an influential book providing evidence that a general factor underlying conservative beliefs is \"fear of uncertainty\". A meta-analysis of research literature by Jost, Glaser, Kruglanski, and Sulloway in 2003 found that many factors, such as intolerance of ambiguity and need for cognitive closure, contribute to the degree of one's political conservatism and its manifestations in decision-making. A study by Kathleen Maclay stated these traits \"might be associated with such generally valued characteristics as personal commitment and unwavering loyalty\". The research also suggested that while most people are resistant to change, social liberals are more tolerant of it.",
"title": "Psychology"
},
{
"paragraph_id": 143,
"text": "Social dominance orientation (SDO) is a personality trait measuring an individual's support for social hierarchy and the extent to which they desire their in-group be superior to out-groups. Psychologist Felicia Pratto and her colleagues have found evidence to support the claim that a high SDO is strongly correlated with conservative views and opposition to social engineering to promote equality. Pratto and her colleagues also found that high SDO scores were highly correlated with measures of prejudice.",
"title": "Psychology"
},
{
"paragraph_id": 144,
"text": "However, David J. Schneider argued for a more complex relationships between the three factors, writing that \"correlations between prejudice and political conservatism are reduced virtually to zero when controls for SDO are instituted, suggesting that the conservatism–prejudice link is caused by SDO\". Conservative political theorist Kenneth Minogue criticized Pratto's work, saying:",
"title": "Psychology"
},
{
"paragraph_id": 145,
"text": "It is characteristic of the conservative temperament to value established identities, to praise habit and to respect prejudice, not because it is irrational, but because such things anchor the darting impulses of human beings in solidities of custom which we do not often begin to value until we are already losing them. Radicalism often generates youth movements, while conservatism is a condition found among the mature, who have discovered what it is in life they most value.",
"title": "Psychology"
},
{
"paragraph_id": 146,
"text": "A 1996 study by Pratto and her colleagues examined the topic of racism. Contrary to what these theorists predicted, correlations between conservatism and racism were strongest among the most educated individuals, and weakest among the least educated. They also found that the correlation between racism and conservatism could be accounted for by their mutual relationship with SDO.",
"title": "Psychology"
},
{
"paragraph_id": 147,
"text": "In his book Gross National Happiness (2008), Arthur C. Brooks presents the finding that conservatives are roughly twice as happy as social liberals. A 2008 study suggested that conservatives tend to be happier than social liberals because of their tendency to justify the current state of affairs and to remain unbothered by inequalities in society. A 2012 study disputed this, demonstrating that conservatives expressed greater personal agency (e.g., personal control, responsibility), more positive outlook (e.g., optimism, self-worth), and more transcendent moral beliefs (e.g., greater religiosity, greater moral clarity).",
"title": "Psychology"
},
{
"paragraph_id": 148,
"text": "General",
"title": "Further reading"
},
{
"paragraph_id": 149,
"text": "Conservatism and fascism",
"title": "Further reading"
},
{
"paragraph_id": 150,
"text": "Conservatism and liberalism",
"title": "Further reading"
},
{
"paragraph_id": 151,
"text": "Conservatism and reactionism",
"title": "Further reading"
},
{
"paragraph_id": 152,
"text": "Conservatism and women",
"title": "Further reading"
},
{
"paragraph_id": 153,
"text": "Conservatism in Europe",
"title": "Further reading"
},
{
"paragraph_id": 154,
"text": "Conservatism in Germany",
"title": "Further reading"
},
{
"paragraph_id": 155,
"text": "Conservatism in Latin America",
"title": "Further reading"
},
{
"paragraph_id": 156,
"text": "Conservatism in Russia",
"title": "Further reading"
},
{
"paragraph_id": 157,
"text": "Conservatism in the United Kingdom",
"title": "Further reading"
},
{
"paragraph_id": 158,
"text": "Conservatism in the United States",
"title": "Further reading"
},
{
"paragraph_id": 159,
"text": "Psychology",
"title": "Further reading"
},
{
"paragraph_id": 160,
"text": "Other",
"title": "Further reading"
}
] | Conservatism is a cultural, social, and political philosophy that seeks to promote and to preserve traditional institutions, customs, and values. The central tenets of conservatism may vary in relation to the culture and civilization in which it appears. In Western culture, depending on the particular nation, conservatives seek to promote a range of social institutions, such as the nuclear family, organized religion, the military, the nation-state, property rights, rule of law, aristocracy, and monarchy. Conservatives tend to favour institutions and practices that guarantee social order and historical continuity. Edmund Burke, an 18th-century Anglo-Irish statesman who opposed the French Revolution but supported the American Revolution, is credited as one of the main philosophers of conservatism in the 1790s. The first established use of the term in a political context originated in 1818 with François-René de Chateaubriand during the period of Bourbon Restoration that sought to roll back the policies of the French Revolution and establish social order. Conservative thought has varied considerably as it has adapted itself to existing traditions and national cultures. Thus, conservatives from different parts of the world—each upholding their respective traditions—may disagree on a wide range of issues. Historically associated with right-wing politics, the term has been used to describe a wide range of views. Conservatism may be either more libertarian or more authoritarian; more populist or more elitist; more progressive or more reactionary; more moderate or more extreme. | 2001-10-04T06:32:18Z | 2023-12-30T06:43:37Z | [
"Template:Refend",
"Template:Wikiquote",
"Template:Reflist",
"Template:Cite web",
"Template:Portal bar",
"Template:Conservatism in Greece sidebar",
"Template:Conservatism US",
"Template:Colend",
"Template:Harvnb",
"Template:Cite encyclopedia",
"Template:Conservatism in Austria",
"Template:Conservatism in Denmark",
"Template:About",
"Template:Conservatism UK",
"Template:Toryism",
"Template:Multiple image",
"Template:Cite book",
"Template:Dead link",
"Template:Citation needed",
"Template:Conservatism in Australia",
"Template:Quote",
"Template:Use mdy dates",
"Template:Use American English",
"Template:Conservatism navbox",
"Template:Redirect",
"Template:Refbegin",
"Template:Conservatism in Italy",
"Template:Conservatism in Switzerland",
"Template:Conservatism in Canada",
"Template:Navboxes",
"Template:Main",
"Template:Clarify",
"Template:Cbignore",
"Template:Conservatism New Zealand",
"Template:In lang",
"Template:Full citation needed",
"Template:Cite SEP",
"Template:Main article",
"Template:Conservatism in Sweden",
"Template:Lang",
"Template:Conservatism in France",
"Template:Conservatism in Germany",
"Template:Cite news",
"Template:Distinguish",
"Template:Sfn",
"Template:Doi",
"Template:Curlie",
"Template:See also",
"Template:Harvtxt",
"Template:Cite magazine",
"Template:Citation",
"Template:Short description",
"Template:Anchor",
"Template:Conservatism in Russia",
"Template:Conservatism in Brazil",
"Template:Cols",
"Template:Webarchive",
"Template:JSTOR",
"Template:Blockquote",
"Template:Explanation needed",
"Template:ISBN",
"Template:Cite journal",
"Template:Authority control",
"Template:Conservatism sidebar",
"Template:Conservatism in South Korea"
] | https://en.wikipedia.org/wiki/Conservatism |
6,677 | Classical liberalism | Classical liberalism is a political tradition and a branch of liberalism which advocates free market and laissez-faire economics; and civil liberties under the rule of law, with special emphasis on individual autonomy, limited government, economic freedom, political freedom and freedom of speech. Classical liberalism, contrary to liberal branches like social liberalism, looks more negatively on social policies, taxation and the state involvement in the lives of individuals, and it advocates deregulation.
Until the Great Depression and the rise of social liberalism, classical liberalism was called economic liberalism. Later, the term was applied as a retronym, to distinguish earlier 19th-century liberalism from social liberalism. By modern standards, in the United States, simple liberalism often means social liberalism, but in Europe and Australia, simple liberalism often means classical liberalism.
Classical liberalism gained full flowering in the early 18th century, building on ideas starting at least as far back as the 16th century, within the Iberian, Anglo-Saxon, and central European contexts, and it was foundational to the American Revolution and "American Project" more broadly. Notable liberal individuals whose ideas contributed to classical liberalism include John Locke, Jean-Baptiste Say, Thomas Malthus, and David Ricardo. It drew on classical economics, especially the economic ideas as espoused by Adam Smith in Book One of The Wealth of Nations, and on a belief in natural law, social progress, and utilitarianism. In contemporary times, Friedrich Hayek, Milton Friedman, Ludwig von Mises, Thomas Sowell, George Stigler and Larry Arnhart are seen as the most prominent advocates of classical liberalism. However, other scholars have made reference to these contemporary thoughts as neoclassical liberalism, distinguishing them from 18th-century classical liberalism.
In the context of American politics, "classical liberalism" may be described as "fiscally conservative" and "socially liberal". Despite this, classical liberals tend to reject the right's higher tolerance for economic protectionism and the left's inclination for collective group rights due to classical liberalism's central principle of individualism. Additionally, in the United States, classical liberalism is considered closely tied to, or synonymous with, American libertarianism.
Core beliefs of classical liberals included new ideas – which departed from both the older conservative idea of society as a family and from the later sociological concept of society as a complex set of social networks.
Classical liberals agreed with Thomas Hobbes that individuals created government to protect themselves from each other and to minimize conflict between individuals that would otherwise arise in a state of nature. These beliefs were complemented by the belief that financial incentives could best motivate labourers. This belief led to the passage of the Poor Law Amendment Act 1834, which limited the provision of social assistance, based on the idea that markets are the mechanism that most efficiently leads to wealth.
Drawing on ideas of Adam Smith, classical liberals believed that it is in the common interest that all individuals be able to secure their own economic self-interest. They were critical of what would come to be the idea of the welfare state as interfering in a free market. Despite Smith's resolute recognition of the importance and value of labour and of labourers, classical liberals criticized labour's group rights being pursued at the expense of individual rights while accepting corporations' rights, which led to inequality of bargaining power. Classical liberals argued that individuals should be free to obtain work from the highest-paying employers, while the profit motive would ensure that products that people desired were produced at prices they would pay. In a free market, both labour and capital would receive the greatest possible reward, while production would be organized efficiently to meet consumer demand. Classical liberals argued for what they called a minimal state and government, limited to the following functions:
Classical liberals asserted that rights are of a negative nature and therefore stipulate that other individuals and governments are to refrain from interfering with the free market, opposing social liberals who assert that individuals have positive rights, such as the right to vote, the right to an education, the right to health care, and the right to a minimum wage. For society to guarantee positive rights, it requires taxation over and above the minimum needed to enforce negative rights.
Core beliefs of classical liberals did not necessarily include democracy nor government by a majority vote by citizens because "there is nothing in the bare idea of majority rule to show that majorities will always respect the rights of property or maintain rule of law". For example, James Madison argued for a constitutional republic with protections for individual liberty over a pure democracy, reasoning that in a pure democracy a "common passion or interest will, in almost every case, be felt by a majority of the whole ... and there is nothing to check the inducements to sacrifice the weaker party".
In the late 19th century, classical liberalism developed into neoclassical liberalism, which argued for government to be as small as possible to allow the exercise of individual freedom. In its most extreme form, neoclassical liberalism advocated social Darwinism. Right-libertarianism is a modern form of neoclassical liberalism. However, Edwin Van de Haar states that although classical liberal thought influenced libertarianism, there are significant differences between them. Classical liberalism refuses to give priority to liberty over order and therefore does not exhibit the hostility to the state which is the defining feature of libertarianism. As such, right-libertarians believe classical liberals do not have enough respect for individual property rights and lack sufficient trust in the free market's workings and spontaneous order, leading to their support of a much larger state. Right-libertarians also criticize classical liberals for being too supportive of central banks and monetarist policies.
Friedrich Hayek identified two different traditions within classical liberalism, namely the British tradition and the French tradition:
Hayek conceded that the national labels did not exactly correspond to those belonging to each tradition since he saw the Frenchmen Montesquieu, Benjamin Constant, Joseph De Maistre and Alexis de Tocqueville as belonging to the British tradition and the British Thomas Hobbes, Joseph Priestley, Richard Price, Edward Gibbon, Benjamin Franklin, Thomas Jefferson and Thomas Paine as belonging to the French tradition. Hayek also rejected the label laissez-faire as originating from the French tradition and alien to the beliefs of Hume and Smith.
Guido De Ruggiero also identified differences between "Montesquieu and Rousseau, the English and the democratic types of liberalism" and argued that there was a "profound contrast between the two Liberal systems". He claimed that the spirit of "authentic English Liberalism" had "built up its work piece by piece without ever destroying what had once been built, but basing upon it every new departure". This liberalism had "insensibly adapted ancient institutions to modern needs" and "instinctively recoiled from all abstract proclamations of principles and rights". Ruggiero claimed that this liberalism was challenged by what he called the "new Liberalism of France" that was characterised by egalitarianism and a "rationalistic consciousness".
In 1848, Francis Lieber distinguished between what he called "Anglican and Gallican Liberty". Lieber asserted that "independence in the highest degree, compatible with safety and broad national guarantees of liberty, is the great aim of Anglican liberty, and self-reliance is the chief source from which it draws its strength". On the other hand, Gallican liberty "is sought in government ... . [T]he French look for the highest degree of political civilisation in organisation, that is, in the highest degree of interference by public power".
French physiocracy heavily influenced British classical liberalism, which traces its roots to the Whigs and Radicals. Whiggery had become a dominant ideology following the Glorious Revolution of 1688 and was associated with support for the British Parliament, the rule of law, and the defence of landed property; it sometimes also included freedom of the press and freedom of speech. The origins of rights were seen as lying in an ancient constitution that had existed from time immemorial, and these rights were justified by custom rather than by natural rights. Whigs believed that executive power had to be constrained. While they supported limited suffrage, they saw voting as a privilege rather than a right. However, there was no consistency in Whig ideology, and diverse writers including John Locke, David Hume, Adam Smith and Edmund Burke were all influential among Whigs, although none of them was universally accepted.
From the 1790s to the 1820s, British radicals concentrated on parliamentary and electoral reform, emphasising natural rights and popular sovereignty. Richard Price and Joseph Priestley adapted the language of Locke to the ideology of radicalism. The radicals saw parliamentary reform as a first step toward dealing with their many grievances, including the treatment of Protestant Dissenters, the slave trade, high prices, and high taxes. There was greater unity among classical liberals than there had been among Whigs. Classical liberals were committed to individualism, liberty, and equal rights, as well as some other important tenets of leftism, since classical liberalism was introduced in the late 18th century as a leftist movement. They believed these goals required a free economy with minimal government interference. Some elements of Whiggery were uncomfortable with the commercial nature of classical liberalism. These elements became associated with conservatism.
Classical liberalism was the dominant political theory in Britain from the early 19th century until the First World War. Its notable victories were the Roman Catholic Relief Act 1829, the Reform Act of 1832 and the repeal of the Corn Laws in 1846. The Anti-Corn Law League brought together a coalition of liberal and radical groups in support of free trade under the leadership of Richard Cobden and John Bright, who opposed aristocratic privilege, militarism, and public expenditure and believed that the backbone of Great Britain was the yeoman farmer. Their policies of low public expenditure and low taxation were adopted by William Gladstone when he became Chancellor of the Exchequer and later Prime Minister. Classical liberalism was often associated with religious dissent and nonconformism.
Although classical liberals aspired to a minimum of state activity, they accepted the principle of government intervention in the economy from the early 19th century on, with passage of the Factory Acts. From around 1840 to 1860, laissez-faire advocates of the Manchester School and writers in The Economist were confident that their early victories would lead to a period of expanding economic and personal liberty and world peace, but would face reversals as government intervention and activity continued to expand from the 1850s. Jeremy Bentham and James Mill, although advocates of laissez-faire, non-intervention in foreign affairs, and individual liberty, believed that social institutions could be rationally redesigned through the principles of utilitarianism. The Conservative Prime Minister Benjamin Disraeli rejected classical liberalism altogether and advocated Tory democracy. By the 1870s, Herbert Spencer and other classical liberals concluded that historical development was turning against them. By the First World War, the Liberal Party had largely abandoned classical liberal principles.
The changing economic and social conditions of the 19th century led to a division between neo-classical and social (or welfare) liberals, who, while agreeing on the importance of individual liberty, differed on the role of the state. Neo-classical liberals, who called themselves "true liberals", saw Locke's Second Treatise as the best guide and emphasised "limited government", while social liberals supported government regulation and the welfare state. Herbert Spencer in Britain and William Graham Sumner in the United States were the leading neo-classical liberal theorists of the 19th century. The evolution from classical to social/welfare liberalism is, for example, reflected in Britain in the evolution of the thought of John Maynard Keynes.
The Ottoman Empire had liberal free trade policies by the 18th century, with origins in capitulations of the Ottoman Empire, dating back to the first commercial treaties signed with France in 1536 and taken further with capitulations in 1673, in 1740 which lowered duties to only 3% for imports and exports and in 1790. Ottoman free trade policies were praised by British economists advocating free trade such as J. R. McCulloch in his Dictionary of Commerce (1834) but criticized by British politicians opposing free trade such as Prime Minister Benjamin Disraeli, who cited the Ottoman Empire as "an instance of the injury done by unrestrained competition" in the 1846 Corn Laws debate, arguing that it destroyed what had been "some of the finest manufactures of the world" in 1812.
In the United States, liberalism took a strong root because it had little opposition to its ideals, whereas in Europe liberalism was opposed by many reactionary or feudal interests such as the nobility; the aristocracy, including army officers; the landed gentry; and the established church. Thomas Jefferson adopted many of the ideals of liberalism, but in the Declaration of Independence changed Locke's "life, liberty and property" to the more socially liberal "Life, Liberty and the pursuit of Happiness". As the United States grew, industry became a larger and larger part of American life; and during the term of its first populist President, Andrew Jackson, economic questions came to the forefront. The economic ideas of the Jacksonian era were almost universally the ideas of classical liberalism. Freedom, according to classical liberals, was maximised when the government took a "hands off" attitude toward the economy. Historian Kathleen G. Donohue argues:
[A]t the center of classical liberal theory [in Europe] was the idea of laissez-faire. To the vast majority of American classical liberals, however, laissez-faire did not mean no government intervention at all. On the contrary, they were more than willing to see government provide tariffs, railroad subsidies, and internal improvements, all of which benefited producers. What they condemned was intervention on behalf of consumers.
The Nation magazine espoused liberalism every week starting in 1865 under the influential editor Edwin Lawrence Godkin (1831–1902). The ideas of classical liberalism remained essentially unchallenged until a series of depressions, thought to be impossible according to the tenets of classical economics, led to economic hardship from which the voters demanded relief. In the words of William Jennings Bryan, "You shall not crucify this nation on a cross of gold". Classical liberalism remained the orthodox belief among American businessmen until the Great Depression. The Great Depression in the United States saw a sea change in liberalism, with priority shifting from the producers to consumers. Franklin D. Roosevelt's New Deal represented the dominance of modern liberalism in politics for decades. In the words of Arthur Schlesinger Jr.:
When the growing complexity of industrial conditions required increasing government intervention in order to assure more equal opportunities, the liberal tradition, faithful to the goal rather than to the dogma, altered its view of the state. ... There emerged the conception of a social welfare state, in which the national government had the express obligation to maintain high levels of employment in the economy, to supervise standards of life and labour, to regulate the methods of business competition, and to establish comprehensive patterns of social security.
Alan Wolfe summarizes the viewpoint that there is a continuous liberal understanding that includes both Adam Smith and John Maynard Keynes:
The idea that liberalism comes in two forms assumes that the most fundamental question facing mankind is how much government intervenes into the economy. ... When instead we discuss human purpose and the meaning of life, Adam Smith and John Maynard Keynes are on the same side. Both of them possessed an expansive sense of what we are put on this earth to accomplish. ... For Smith, mercantilism was the enemy of human liberty. For Keynes, monopolies were. It makes perfect sense for an eighteenth-century thinker to conclude that humanity would flourish under the market. For a twentieth century thinker committed to the same ideal, government was an essential tool to the same end.
The view that modern liberalism is a continuation of classical liberalism is controversial and disputed by many. James Kurth, Robert E. Lerner, John Micklethwait, Adrian Wooldridge and several other political scholars have argued that classical liberalism still exists today, but in the form of American conservatism. According to Deepak Lal, only in the United States does classical liberalism continue to be a significant political force through American conservatism. American libertarians also claim to be the true continuation of the classical liberal tradition.
Central to classical liberal ideology was their interpretation of John Locke's Second Treatise of Government and A Letter Concerning Toleration, which had been written as a defence of the Glorious Revolution of 1688. Although these writings were considered too radical at the time for Britain's new rulers, Whigs, radicals and supporters of the American Revolution later came to cite them. However, much of later liberal thought was absent in Locke's writings or scarcely mentioned and his writings have been subject to various interpretations. For example, there is little mention of constitutionalism, the separation of powers and limited government.
James L. Richardson identified five central themes in Locke's writing:
Although Locke did not develop a theory of natural rights, he envisioned individuals in the state of nature as being free and equal. The individual, rather than the community or institutions, was the point of reference. Locke believed that individuals had given consent to government and therefore authority derived from the people rather than from above. This belief would influence later revolutionary movements.
As a trustee, government was expected to serve the interests of the people, not the rulers; and rulers were expected to follow the laws enacted by legislatures. Locke also held that the main purpose of men uniting into commonwealths and governments was for the preservation of their property. Despite the ambiguity of Locke's definition of property, which limited property to "as much land as a man tills, plants, improves, cultivates, and can use the product of", this principle held great appeal to individuals possessed of great wealth.
Locke held that the individual had the right to follow his own religious beliefs and that the state should not impose a religion against Dissenters, but there were limitations. No tolerance should be shown for atheists, who were seen as amoral, or for Catholics, who were seen as owing allegiance to the Pope over their own national government.
Adam Smith's The Wealth of Nations, published in 1776, was to provide most of the ideas of economics, at least until the publication of John Stuart Mill's Principles of Political Economy in 1848. Smith addressed the motivation for economic activity, the causes of prices and the distribution of wealth and the policies the state should follow to maximise wealth.
Smith wrote that as long as supply, demand, prices and competition were left free of government regulation, the pursuit of material self-interest, rather than altruism, would maximise the wealth of a society through profit-driven production of goods and services. An "invisible hand" directed individuals and firms to work toward the public good as an unintended consequence of efforts to maximise their own gain. This provided a moral justification for the accumulation of wealth, which had previously been viewed by some as sinful.
He assumed that workers could be paid wages as low as was necessary for their survival, which was later transformed by David Ricardo and Thomas Robert Malthus into the "iron law of wages". His main emphasis was on the benefit of free internal and international trade, which he thought could increase wealth through specialisation in production. He also opposed restrictive trade preferences, state grants of monopolies and employers' organisations and trade unions. Government should be limited to defence, public works and the administration of justice, financed by taxes based on income.
Smith's economics was carried into practice in the nineteenth century with the lowering of tariffs in the 1820s, the repeal in 1834 of the Poor Relief Act that had restricted the mobility of labour, and the end of the rule of the East India Company over India in 1858.
In addition to Smith's legacy, Say's law, Thomas Robert Malthus' theories of population and David Ricardo's iron law of wages became central doctrines of classical economics. The pessimistic nature of these theories provided a basis for criticism of capitalism by its opponents and helped perpetuate the tradition of calling economics the "dismal science".
Jean-Baptiste Say was a French economist who introduced Smith's economic theories into France and whose commentaries on Smith were read in both France and Britain. Say challenged Smith's labour theory of value, believing that prices were determined by utility, and also emphasised the critical role of the entrepreneur in the economy. However, neither of those observations was accepted by British economists at the time. His most important contribution to economic thinking was Say's law, which classical economists interpreted as meaning that there could be no overproduction in a market and that there would always be a balance between supply and demand. This general belief influenced government policies until the 1930s. Following this law, since the economic cycle was seen as self-correcting, government did not intervene during periods of economic hardship because intervention was seen as futile.
Malthus wrote two books, An Essay on the Principle of Population (published in 1798) and Principles of Political Economy (published in 1820). The second book, a rebuttal of Say's law, had little influence on contemporary economists. However, his first book became a major influence on classical liberalism. In that book, Malthus claimed that population growth would outstrip food production because population grew geometrically while food production grew arithmetically. As people were provided with food, they would reproduce until their growth outstripped the food supply. Nature would then provide a check to growth in the forms of vice and misery. No gains in income could prevent this, and any welfare for the poor would be self-defeating. The poor were in fact responsible for their own problems, which could have been avoided through self-restraint.
Ricardo, who was an admirer of Smith, covered many of the same topics, but while Smith drew conclusions from broadly empirical observations, Ricardo used deduction, drawing conclusions by reasoning from basic assumptions. While Ricardo accepted Smith's labour theory of value, he acknowledged that utility could influence the price of some rare items. Rents on agricultural land were seen as the production that was surplus to the subsistence required by the tenants. Wages were seen as the amount required for workers' subsistence and to maintain current population levels. According to his iron law of wages, wages could never rise beyond subsistence levels. Ricardo explained profits as a return on capital, which itself was the product of labour, but a conclusion many drew from his theory was that profit was a surplus appropriated by capitalists to which they were not entitled.
The central concept of utilitarianism, which was developed by Jeremy Bentham, was that public policy should seek to provide "the greatest happiness of the greatest number". While this could be interpreted as a justification for state action to reduce poverty, it was used by classical liberals to justify inaction with the argument that the net benefit to all individuals would be higher.
Utilitarianism provided British governments with the political justification to implement economic liberalism, which was to dominate economic policy from the 1830s. Although utilitarianism prompted legislative and administrative reform and John Stuart Mill's later writings on the subject foreshadowed the welfare state, it was mainly used as a justification for laissez-faire.
Classical liberals following Mill saw utility as the foundation for public policies. This broke both with conservative "tradition" and Lockean "natural rights", which were seen as irrational. Utility, which emphasises the happiness of individuals, became the central ethical value of all Mill-style liberalism. Although utilitarianism inspired wide-ranging reforms, it became primarily a justification for laissez-faire economics. However, Mill adherents rejected Smith's belief that the "invisible hand" would lead to general benefits and embraced Malthus' view that population expansion would prevent any general benefit and Ricardo's view of the inevitability of class conflict. Laissez-faire was seen as the only possible economic approach and any government intervention was seen as useless and harmful. The Poor Law Amendment Act 1834 was defended on "scientific or economic principles" while the authors of the Poor Relief Act 1601 were seen as not having had the benefit of reading Malthus.
However, commitment to laissez-faire was not uniform and some economists advocated state support of public works and education. Classical liberals were also divided on free trade as Ricardo expressed doubt that the removal of grain tariffs advocated by Richard Cobden and the Anti-Corn Law League would have any general benefits. Most classical liberals also supported legislation to regulate the number of hours that children were allowed to work and usually did not oppose factory reform legislation.
Despite the pragmatism of classical economists, their views were expressed in dogmatic terms by such popular writers as Jane Marcet and Harriet Martineau. The strongest defender of laissez-faire was The Economist founded by James Wilson in 1843. The Economist criticised Ricardo for his lack of support for free trade and expressed hostility to welfare, believing that the lower orders were responsible for their economic circumstances. The Economist took the position that regulation of factory hours was harmful to workers and also strongly opposed state support for education, health, the provision of water and granting of patents and copyrights.
The Economist also campaigned against the Corn Laws that protected landlords in the United Kingdom of Great Britain and Ireland against competition from less expensive foreign imports of cereal products. A rigid belief in laissez-faire guided the government response in 1846–1849 to the Great Famine in Ireland, during which an estimated 1.5 million people died. The minister responsible for economic and financial affairs, Charles Wood, expected that private enterprise and free trade, rather than government intervention, would alleviate the famine. The Corn Laws were finally repealed in 1846 by the removal of tariffs on grain which kept the price of bread artificially high, but it came too late to stop the Irish famine, partly because it was done in stages over three years.
Several liberals, including Smith and Cobden, argued that the free exchange of goods between nations could lead to world peace. Erik Gartzke states: "Scholars like Montesquieu, Adam Smith, Richard Cobden, Norman Angell, and Richard Rosecrance have long speculated that free markets have the potential to free states from the looming prospect of recurrent warfare". American political scientists John R. Oneal and Bruce M. Russett, well known for their work on the democratic peace theory, state:
The classical liberals advocated policies to increase liberty and prosperity. They sought to empower the commercial class politically and to abolish royal charters, monopolies, and the protectionist policies of mercantilism so as to encourage entrepreneurship and increase productive efficiency. They also expected democracy and laissez-faire economics to diminish the frequency of war.
In The Wealth of Nations, Smith argued that as societies progressed from hunter-gatherers to industrial societies, the spoils of war would rise, but that the costs of war would rise further, making war difficult and costly for industrialised nations:
[T]he honours, the fame, the emoluments of war, belong not to [the middle and industrial classes]; the battle-plain is the harvest field of the aristocracy, watered with the blood of the people. ... Whilst our trade rested upon our foreign dependencies, as was the case in the middle of the last century...force and violence, were necessary to command our customers for our manufacturers...But war, although the greatest of consumers, not only produces nothing in return, but, by abstracting labour from productive employment and interrupting the course of trade, it impedes, in a variety of indirect ways, the creation of wealth; and, should hostilities be continued for a series of years, each successive war-loan will be felt in our commercial and manufacturing districts with an augmented pressure
[B]y virtue of their mutual interest does nature unite people against violence and war, for the concept of cosmopolitan right does not protect them from it. The spirit of trade cannot coexist with war, and sooner or later this spirit dominates every people. For among all those powers (or means) that belong to a nation, financial power may be the most reliable in forcing nations to pursue the noble cause of peace (though not from moral motives); and wherever in the world war threatens to break out, they will try to head it off through mediation, just as if they were permanently leagued for this purpose.
Cobden believed that military expenditures worsened the welfare of the state and benefited a small, but concentrated elite minority, summing up British imperialism, which he believed was the result of the economic restrictions of mercantilist policies. To Cobden and many classical liberals, those who advocated peace must also advocate free markets. The belief that free trade would promote peace was widely shared by English liberals of the 19th and early 20th century, leading the economist John Maynard Keynes (1883–1946), who was a classical liberal in his early life, to say that this was a doctrine on which he was "brought up" and which he held unquestioned only until the 1920s. In his review of a book on Keynes, Michael S. Lawlor argues that it may be in large part due to Keynes' contributions in economics and politics, as in the implementation of the Marshall Plan and the way economies have been managed since his work, "that we have the luxury of not facing his unpalatable choice between free trade and full employment". A related manifestation of this idea was the argument of Norman Angell (1872–1967), most famously before World War I in The Great Illusion (1909), that the interdependence of the economies of the major powers was now so great that war between them was futile and irrational; and therefore unlikely.
Although libertarian, liberal-conservative, and some right-wing populist political parties are also included among classical liberal parties in a broad sense, only explicitly classical liberal parties, such as Germany's FDP, Denmark's Liberal Alliance, and Thailand's Democrat Party, should be listed.
Tadd Wilson, writing for the libertarian Foundation for Economic Education, noted that "Many on the left and right criticize classical liberals for focusing purely on economics and politics to the neglect of a vital issue: culture."
Helena Vieira, writing for the London School of Economics, argued that classical liberalism "may contradict some fundamental democratic principles as they are inconsistent with the principle of unanimity (also known as the Pareto Principle) – the idea that if everyone in society prefers a policy A to a policy B, then the former should be adopted." | [
{
"paragraph_id": 0,
"text": "Classical liberalism is a political tradition and a branch of liberalism which advocates free market and laissez-faire economics; and civil liberties under the rule of law, with special emphasis on individual autonomy, limited government, economic freedom, political freedom and freedom of speech. Classical liberalism, contrary to liberal branches like social liberalism, looks more negatively on social policies, taxation and the state involvement in the lives of individuals, and it advocates deregulation.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Until the Great Depression and the rise of social liberalism, classical liberalism was called economic liberalism. Later, the term was applied as a retronym, to distinguish earlier 19th-century liberalism from social liberalism. By modern standards, in the United States, simple liberalism often means social liberalism, but in Europe and Australia, simple liberalism often means classical liberalism.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Classical liberalism gained full flowering in the early 18th century, building on ideas starting at least as far back as the 16th century, within the Iberian, Anglo-Saxon, and central European contexts, and it was foundational to the American Revolution and \"American Project\" more broadly. Notable liberal individuals whose ideas contributed to classical liberalism include John Locke, Jean-Baptiste Say, Thomas Malthus, and David Ricardo. It drew on classical economics, especially the economic ideas as espoused by Adam Smith in Book One of The Wealth of Nations, and on a belief in natural law, social progress, and utilitarianism. In contemporary times, Friedrich Hayek, Milton Friedman, Ludwig von Mises, Thomas Sowell, George Stigler and Larry Arnhart are seen as the most prominent advocates of classical liberalism. However, other scholars have made reference to these contemporary thoughts as neoclassical liberalism, distinguishing them from 18th-century classical liberalism.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the context of American politics, \"classical liberalism\" may be described as \"fiscally conservative\" and \"socially liberal\". Despite this, classical liberals tend to reject the right's higher tolerance for economic protectionism and the left's inclination for collective group rights due to classical liberalism's central principle of individualism. Additionally, in the United States, classical liberalism is considered closely tied to, or synonymous with, American libertarianism.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Core beliefs of classical liberals included new ideas – which departed from both the older conservative idea of society as a family and from the later sociological concept of society as a complex set of social networks.",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 5,
"text": "Classical liberals agreed with Thomas Hobbes that individuals created government to protect themselves from each other and to minimize conflict between individuals that would otherwise arise in a state of nature. These beliefs were complemented by a belief that financial incentive could be best motivate labourers. This belief led to the passage of the Poor Law Amendment Act 1834, which limited the provision of social assistance, based on the idea that markets are the mechanism that most efficiently leads to wealth.",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 6,
"text": "Drawing on ideas of Adam Smith, classical liberals believed that it is in the common interest that all individuals be able to secure their own economic self-interest. They were critical of what would come to be the idea of the welfare state as interfering in a free market. Despite Smith's resolute recognition of the importance and value of labour and of labourers, classical liberals criticized labour's group rights being pursued at the expense of individual rights while accepting corporations' rights, which led to inequality of bargaining power. Classical liberals argued that individuals should be free to obtain work from the highest-paying employers, while the profit motive would ensure that products that people desired were produced at prices they would pay. In a free market, both labour and capital would receive the greatest possible reward, while production would be organized efficiently to meet consumer demand. Classical liberals argued for what they called a minimal state and government, limited to the following functions:",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 7,
"text": "Classical liberals asserted that rights are of a negative nature and therefore stipulate that other individuals and governments are to refrain from interfering with the free market, opposing social liberals who assert that individuals have positive rights, such as the right to vote, the right to an education, the right to health care, and the right to a minimum wage. For society to guarantee positive rights, it requires taxation over and above the minimum needed to enforce negative rights.",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 8,
"text": "Core beliefs of classical liberals did not necessarily include democracy nor government by a majority vote by citizens because \"there is nothing in the bare idea of majority rule to show that majorities will always respect the rights of property or maintain rule of law\". For example, James Madison argued for a constitutional republic with protections for individual liberty over a pure democracy, reasoning that in a pure democracy a \"common passion or interest will, in almost every case, be felt by a majority of the whole ... and there is nothing to check the inducements to sacrifice the weaker party\".",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 9,
"text": "In the late 19th century, classical liberalism developed into neoclassical liberalism, which argued for government to be as small as possible to allow the exercise of individual freedom. In its most extreme form, neoclassical liberalism advocated social Darwinism. Right-libertarianism is a modern form of neoclassical liberalism. However, Edwin Van de Haar states although classical liberal thought influenced libertarianism, there are significant differences between them. Classical liberalism refuses to give priority to liberty over order and therefore does not exhibit the hostility to the state which is the defining feature of libertarianism. As such, right-libertarians believe classical liberals do not have enough respect for individual property rights and lack sufficient trust in the free market's workings and spontaneous order leading to their support of a much larger state. Right-libertarians also disagree with classical liberals as being too supportive of central banks and monetarist policies.",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 10,
"text": "Friedrich Hayek identified two different traditions within classical liberalism, namely the British tradition and the French tradition:",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 11,
"text": "Hayek conceded that the national labels did not exactly correspond to those belonging to each tradition since he saw the Frenchmen Montesquieu, Benjamin Constant, Joseph De Maistre and Alexis de Tocqueville as belonging to the British tradition and the British Thomas Hobbes, Joseph Priestley, Richard Price, Edward Gibbon, Benjamin Franklin, Thomas Jefferson and Thomas Paine as belonging to the French tradition. Hayek also rejected the label laissez-faire as originating from the French tradition and alien to the beliefs of Hume and Smith.",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 12,
"text": "Guido De Ruggiero also identified differences between \"Montesquieu and Rousseau, the English and the democratic types of liberalism\" and argued that there was a \"profound contrast between the two Liberal systems\". He claimed that the spirit of \"authentic English Liberalism\" had \"built up its work piece by piece without ever destroying what had once been built, but basing upon it every new departure\". This liberalism had \"insensibly adapted ancient institutions to modern needs\" and \"instinctively recoiled from all abstract proclamations of principles and rights\". Ruggiero claimed that this liberalism was challenged by what he called the \"new Liberalism of France\" that was characterised by egalitarianism and a \"rationalistic consciousness\".",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 13,
"text": "In 1848, Francis Lieber distinguished between what he called \"Anglican and Gallican Liberty\". Lieber asserted that \"independence in the highest degree, compatible with safety and broad national guarantees of liberty, is the great aim of Anglican liberty, and self-reliance is the chief source from which it draws its strength\". On the other hand, Gallican liberty \"is sought in government ... . [T]he French look for the highest degree of political civilisation in organisation, that is, in the highest degree of interference by public power\".",
"title": "Evolution of core beliefs"
},
{
"paragraph_id": 14,
"text": "French physiocracy heavily influenced British classical liberalism, which traces its roots to the Whigs and Radicals. Whiggery had become a dominant ideology following the Glorious Revolution of 1688 and was associated with supporting the British Parliament, upholding the rule of law, defending landed property and sometimes included freedom of the press and freedom of speech. The origins of rights were seen as being in an ancient constitution existing from time immemorial. Custom rather than as natural rights justified these rights. Whigs believed that executive power had to be constrained. While they supported limited suffrage, they saw voting as a privilege rather than as a right. However, there was no consistency in Whig ideology and diverse writers including John Locke, David Hume, Adam Smith and Edmund Burke were all influential among Whigs, although none of them were universally accepted.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "From the 1790s to the 1820s, British radicals concentrated on parliamentary and electoral reform, emphasising natural rights and popular sovereignty. Richard Price and Joseph Priestley adapted the language of Locke to the ideology of radicalism. The radicals saw parliamentary reform as a first step toward dealing with their many grievances, including the treatment of Protestant Dissenters, the slave trade, high prices, and high taxes. There was greater unity among classical liberals than there had been among Whigs. Classical liberals were committed to individualism, liberty, and equal rights, as well as some other important tenants of leftism, since classical liberalism was introduced in the late 18th century as a leftist movement. They believed these goals required a free economy with minimal government interference. Some elements of Whiggery were uncomfortable with the commercial nature of classical liberalism. These elements became associated with conservatism.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Classical liberalism was the dominant political theory in Britain from the early 19th century until the First World War. Its notable victories were the Roman Catholic Relief Act 1829, the Reform Act of 1832 and the repeal of the Corn Laws in 1846. The Anti-Corn Law League brought together a coalition of liberal and radical groups in support of free trade under the leadership of Richard Cobden and John Bright, who opposed aristocratic privilege, militarism, and public expenditure and believed that the backbone of Great Britain was the yeoman farmer. Their policies of low public expenditure and low taxation were adopted by William Gladstone when he became Chancellor of the Exchequer and later Prime Minister. Classical liberalism was often associated with religious dissent and nonconformism.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Although classical liberals aspired to a minimum of state activity, they accepted the principle of government intervention in the economy from the early 19th century on, with passage of the Factory Acts. From around 1840 to 1860, laissez-faire advocates of the Manchester School and writers in The Economist were confident that their early victories would lead to a period of expanding economic and personal liberty and world peace, but would face reversals as government intervention and activity continued to expand from the 1850s. Jeremy Bentham and James Mill, although advocates of laissez-faire, non-intervention in foreign affairs, and individual liberty, believed that social institutions could be rationally redesigned through the principles of utilitarianism. The Conservative Prime Minister Benjamin Disraeli rejected classical liberalism altogether and advocated Tory democracy. By the 1870s, Herbert Spencer and other classical liberals concluded that historical development was turning against them. By the First World War, the Liberal Party had largely abandoned classical liberal principles.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "The changing economic and social conditions of the 19th century led to a division between neo-classical and social (or welfare) liberals, who while agreeing on the importance of individual liberty differed on the role of the state. Neo-classical liberals, who called themselves \"true liberals\", saw Locke's Second Treatise as the best guide and emphasised \"limited government\" while social liberals supported government regulation and the welfare state. Herbert Spencer in Britain and William Graham Sumner were the leading neo-classical liberal theorists of the 19th century. The evolution from classical to social/welfare liberalism is for example reflected in Britain in the evolution of the thought of John Maynard Keynes.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Ottoman Empire had liberal free trade policies by the 18th century, with origins in capitulations of the Ottoman Empire, dating back to the first commercial treaties signed with France in 1536 and taken further with capitulations in 1673, in 1740 which lowered duties to only 3% for imports and exports and in 1790. Ottoman free trade policies were praised by British economists advocating free trade such as J. R. McCulloch in his Dictionary of Commerce (1834) but criticized by British politicians opposing free trade such as Prime Minister Benjamin Disraeli, who cited the Ottoman Empire as \"an instance of the injury done by unrestrained competition\" in the 1846 Corn Laws debate, arguing that it destroyed what had been \"some of the finest manufactures of the world\" in 1812.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In the United States, liberalism took a strong root because it had little opposition to its ideals, whereas in Europe liberalism was opposed by many reactionary or feudal interests such as the nobility; the aristocracy, including army officers; the landed gentry; and the established church. Thomas Jefferson adopted many of the ideals of liberalism, but in the Declaration of Independence changed Locke's \"life, liberty and property\" to the more socially liberal \"Life, Liberty and the pursuit of Happiness\". As the United States grew, industry became a larger and larger part of American life; and during the term of its first populist President, Andrew Jackson, economic questions came to the forefront. The economic ideas of the Jacksonian era were almost universally the ideas of classical liberalism. Freedom, according to classical liberals, was maximised when the government took a \"hands off\" attitude toward the economy. Historian Kathleen G. Donohue argues:",
"title": "History"
},
{
"paragraph_id": 21,
"text": "[A]t the center of classical liberal theory [in Europe] was the idea of laissez-faire. To the vast majority of American classical liberals, however, laissez-faire did not mean no government intervention at all. On the contrary, they were more than willing to see government provide tariffs, railroad subsidies, and internal improvements, all of which benefited producers. What they condemned was intervention on behalf of consumers.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The Nation magazine espoused liberalism every week starting in 1865 under the influential editor Edwin Lawrence Godkin (1831–1902). The ideas of classical liberalism remained essentially unchallenged until a series of depressions, thought to be impossible according to the tenets of classical economics, led to economic hardship from which the voters demanded relief. In the words of William Jennings Bryan, \"You shall not crucify this nation on a cross of gold\". Classical liberalism remained the orthodox belief among American businessmen until the Great Depression. The Great Depression in the United States saw a sea change in liberalism, with priority shifting from the producers to consumers. Franklin D. Roosevelt's New Deal represented the dominance of modern liberalism in politics for decades. In the words of Arthur Schlesinger Jr.:",
"title": "History"
},
{
"paragraph_id": 23,
"text": "When the growing complexity of industrial conditions required increasing government intervention in order to assure more equal opportunities, the liberal tradition, faithful to the goal rather than to the dogma, altered its view of the state. ... There emerged the conception of a social welfare state, in which the national government had the express obligation to maintain high levels of employment in the economy, to supervise standards of life and labour, to regulate the methods of business competition, and to establish comprehensive patterns of social security.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Alan Wolfe summarizes the viewpoint that there is a continuous liberal understanding that includes both Adam Smith and John Maynard Keynes:",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The idea that liberalism comes in two forms assumes that the most fundamental question facing mankind is how much government intervenes into the economy. ... When instead we discuss human purpose and the meaning of life, Adam Smith and John Maynard Keynes are on the same side. Both of them possessed an expansive sense of what we are put on this earth to accomplish. ... For Smith, mercantilism was the enemy of human liberty. For Keynes, monopolies were. It makes perfect sense for an eighteenth-century thinker to conclude that humanity would flourish under the market. For a twentieth century thinker committed to the same ideal, government was an essential tool to the same end.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The view that modern liberalism is a continuation of classical liberalism is controversial and disputed by many. James Kurth, Robert E. Lerner, John Micklethwait, Adrian Wooldridge and several other political scholars have argued that classical liberalism still exists today, but in the form of American conservatism. According to Deepak Lal, only in the United States does classical liberalism continue to be a significant political force through American conservatism. American libertarians also claim to be the true continuation of the classical liberal tradition.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Central to classical liberal ideology was their interpretation of John Locke's Second Treatise of Government and A Letter Concerning Toleration, which had been written as a defence of the Glorious Revolution of 1688. Although these writings were considered too radical at the time for Britain's new rulers, Whigs, radicals and supporters of the American Revolution later came to cite them. However, much of later liberal thought was absent in Locke's writings or scarcely mentioned and his writings have been subject to various interpretations. For example, there is little mention of constitutionalism, the separation of powers and limited government.",
"title": "Intellectual sources"
},
{
"paragraph_id": 28,
"text": "James L. Richardson identified five central themes in Locke's writing:",
"title": "Intellectual sources"
},
{
"paragraph_id": 29,
"text": "Although Locke did not develop a theory of natural rights, he envisioned individuals in the state of nature as being free and equal. The individual, rather than the community or institutions, was the point of reference. Locke believed that individuals had given consent to government and therefore authority derived from the people rather than from above. This belief would influence later revolutionary movements.",
"title": "Intellectual sources"
},
{
"paragraph_id": 30,
"text": "As a trustee, government was expected to serve the interests of the people, not the rulers; and rulers were expected to follow the laws enacted by legislatures. Locke also held that the main purpose of men uniting into commonwealths and governments was for the preservation of their property. Despite the ambiguity of Locke's definition of property, which limited property to \"as much land as a man tills, plants, improves, cultivates, and can use the product of\", this principle held great appeal to individuals possessed of great wealth.",
"title": "Intellectual sources"
},
{
"paragraph_id": 31,
"text": "Locke held that the individual had the right to follow his own religious beliefs and that the state should not impose a religion against Dissenters, but there were limitations. No tolerance should be shown for atheists, who were seen as amoral, or to Catholics, who were seen as owing allegiance to the Pope over their own national government.",
"title": "Intellectual sources"
},
{
"paragraph_id": 32,
"text": "Adam Smith's The Wealth of Nations, published in 1776, was to provide most of the ideas of economics, at least until the publication of John Stuart Mill's Principles of Political Economy in 1848. Smith addressed the motivation for economic activity, the causes of prices and the distribution of wealth and the policies the state should follow to maximise wealth.",
"title": "Intellectual sources"
},
{
"paragraph_id": 33,
"text": "Smith wrote that as long as supply, demand, prices and competition were left free of government regulation, the pursuit of material self-interest, rather than altruism, would maximise the wealth of a society through profit-driven production of goods and services. An \"invisible hand\" directed individuals and firms to work toward the public good as an unintended consequence of efforts to maximise their own gain. This provided a moral justification for the accumulation of wealth, which had previously been viewed by some as sinful.",
"title": "Intellectual sources"
},
{
"paragraph_id": 34,
"text": "He assumed that workers could be paid wages as low as was necessary for their survival, which was later transformed by David Ricardo and Thomas Robert Malthus into the \"iron law of wages\". His main emphasis was on the benefit of free internal and international trade, which he thought could increase wealth through specialisation in production. He also opposed restrictive trade preferences, state grants of monopolies and employers' organisations and trade unions. Government should be limited to defence, public works and the administration of justice, financed by taxes based on income.",
"title": "Intellectual sources"
},
{
"paragraph_id": 35,
"text": "Smith's economics was carried into practice in the nineteenth century with the lowering of tariffs in the 1820s, the repeal of the Poor Relief Act that had restricted the mobility of labour in 1834 and the end of the rule of the East India Company over India in 1858.",
"title": "Intellectual sources"
},
{
"paragraph_id": 36,
"text": "In addition to Smith's legacy, Say's law, Thomas Robert Malthus' theories of population and David Ricardo's iron law of wages became central doctrines of classical economics. The pessimistic nature of these theories provided a basis for criticism of capitalism by its opponents and helped perpetuate the tradition of calling economics the \"dismal science\".",
"title": "Intellectual sources"
},
{
"paragraph_id": 37,
"text": "Jean-Baptiste Say was a French economist who introduced Smith's economic theories into France and whose commentaries on Smith were read in both France and Britain. Say challenged Smith's labour theory of value, believing that prices were determined by utility and also emphasised the critical role of the entrepreneur in the economy. However, neither of those observations became accepted by British economists at the time. His most important contribution to economic thinking was Say's law, which was interpreted by classical economists that there could be no overproduction in a market and that there would always be a balance between supply and demand. This general belief influenced government policies until the 1930s. Following this law, since the economic cycle was seen as self-correcting, government did not intervene during periods of economic hardship because it was seen as futile.",
"title": "Intellectual sources"
},
{
"paragraph_id": 38,
"text": "Malthus wrote two books, An Essay on the Principle of Population (published in 1798) and Principles of Political Economy (published in 1820). The second book which was a rebuttal of Say's law had little influence on contemporary economists. However, his first book became a major influence on classical liberalism. In that book, Malthus claimed that population growth would outstrip food production because population grew geometrically while food production grew arithmetically. As people were provided with food, they would reproduce until their growth outstripped the food supply. Nature would then provide a check to growth in the forms of vice and misery. No gains in income could prevent this and any welfare for the poor would be self-defeating. The poor were in fact responsible for their own problems which could have been avoided through self-restraint.",
"title": "Intellectual sources"
},
{
"paragraph_id": 39,
"text": "Ricardo, who was an admirer of Smith, covered many of the same topics, but while Smith drew conclusions from broadly empirical observations he used deduction, drawing conclusions by reasoning from basic assumptions While Ricardo accepted Smith's labour theory of value, he acknowledged that utility could influence the price of some rare items. Rents on agricultural land were seen as the production that was surplus to the subsistence required by the tenants. Wages were seen as the amount required for workers' subsistence and to maintain current population levels. According to his iron law of wages, wages could never rise beyond subsistence levels. Ricardo explained profits as a return on capital, which itself was the product of labour, but a conclusion many drew from his theory was that profit was a surplus appropriated by capitalists to which they were not entitled.",
"title": "Intellectual sources"
},
{
"paragraph_id": 40,
"text": "The central concept of utilitarianism, which was developed by Jeremy Bentham, was that public policy should seek to provide \"the greatest happiness of the greatest number\". While this could be interpreted as a justification for state action to reduce poverty, it was used by classical liberals to justify inaction with the argument that the net benefit to all individuals would be higher.",
"title": "Intellectual sources"
},
{
"paragraph_id": 41,
"text": "Utilitarianism provided British governments with the political justification to implement economic liberalism, which was to dominate economic policy from the 1830s. Although utilitarianism prompted legislative and administrative reform and John Stuart Mill's later writings on the subject foreshadowed the welfare state, it was mainly used as a justification for laissez-faire.",
"title": "Intellectual sources"
},
{
"paragraph_id": 42,
"text": "Classical liberals following Mill saw utility as the foundation for public policies. This broke both with conservative \"tradition\" and Lockean \"natural rights\", which were seen as irrational. Utility, which emphasises the happiness of individuals, became the central ethical value of all Mill-style liberalism. Although utilitarianism inspired wide-ranging reforms, it became primarily a justification for laissez-faire economics. However, Mill adherents rejected Smith's belief that the \"invisible hand\" would lead to general benefits and embraced Malthus' view that population expansion would prevent any general benefit and Ricardo's view of the inevitability of class conflict. Laissez-faire was seen as the only possible economic approach and any government intervention was seen as useless and harmful. The Poor Law Amendment Act 1834 was defended on \"scientific or economic principles\" while the authors of the Poor Relief Act 1601 were seen as not having had the benefit of reading Malthus.",
"title": "Political economy"
},
{
"paragraph_id": 43,
"text": "However, commitment to laissez-faire was not uniform and some economists advocated state support of public works and education. Classical liberals were also divided on free trade as Ricardo expressed doubt that the removal of grain tariffs advocated by Richard Cobden and the Anti-Corn Law League would have any general benefits. Most classical liberals also supported legislation to regulate the number of hours that children were allowed to work and usually did not oppose factory reform legislation.",
"title": "Political economy"
},
{
"paragraph_id": 44,
"text": "Despite the pragmatism of classical economists, their views were expressed in dogmatic terms by such popular writers as Jane Marcet and Harriet Martineau. The strongest defender of laissez-faire was The Economist founded by James Wilson in 1843. The Economist criticised Ricardo for his lack of support for free trade and expressed hostility to welfare, believing that the lower orders were responsible for their economic circumstances. The Economist took the position that regulation of factory hours was harmful to workers and also strongly opposed state support for education, health, the provision of water and granting of patents and copyrights.",
"title": "Political economy"
},
{
"paragraph_id": 45,
"text": "The Economist also campaigned against the Corn Laws that protected landlords in the United Kingdom of Great Britain and Ireland against competition from less expensive foreign imports of cereal products. A rigid belief in laissez-faire guided the government response in 1846–1849 to the Great Famine in Ireland, during which an estimated 1.5 million people died. The minister responsible for economic and financial affairs, Charles Wood, expected that private enterprise and free trade, rather than government intervention, would alleviate the famine. The Corn Laws were finally repealed in 1846 by the removal of tariffs on grain which kept the price of bread artificially high, but it came too late to stop the Irish famine, partly because it was done in stages over three years.",
"title": "Political economy"
},
{
"paragraph_id": 46,
"text": "Several liberals, including Smith and Cobden, argued that the free exchange of goods between nations could lead to world peace. Erik Gartzke states: \"Scholars like Montesquieu, Adam Smith, Richard Cobden, Norman Angell, and Richard Rosecrance have long speculated that free markets have the potential to free states from the looming prospect of recurrent warfare\". American political scientists John R. Oneal and Bruce M. Russett, well known for their work on the democratic peace theory, state:",
"title": "Political economy"
},
{
"paragraph_id": 47,
"text": "The classical liberals advocated policies to increase liberty and prosperity. They sought to empower the commercial class politically and to abolish royal charters, monopolies, and the protectionist policies of mercantilism so as to encourage entrepreneurship and increase productive efficiency. They also expected democracy and laissez-faire economics to diminish the frequency of war.",
"title": "Political economy"
},
{
"paragraph_id": 48,
"text": "In The Wealth of Nations, Smith argued that as societies progressed from hunter gatherers to industrial societies the spoils of war would rise, but that the costs of war would rise further and thus making war difficult and costly for industrialised nations:",
"title": "Political economy"
},
{
"paragraph_id": 49,
"text": "[T]he honours, the fame, the emoluments of war, belong not to [the middle and industrial classes]; the battle-plain is the harvest field of the aristocracy, watered with the blood of the people. ... Whilst our trade rested upon our foreign dependencies, as was the case in the middle of the last century...force and violence, were necessary to command our customers for our manufacturers...But war, although the greatest of consumers, not only produces nothing in return, but, by abstracting labour from productive employment and interrupting the course of trade, it impedes, in a variety of indirect ways, the creation of wealth; and, should hostilities be continued for a series of years, each successive war-loan will be felt in our commercial and manufacturing districts with an augmented pressure",
"title": "Political economy"
},
{
"paragraph_id": 50,
"text": "[B]y virtue of their mutual interest does nature unite people against violence and war, for the concept of cosmopolitan right does not protect them from it. The spirit of trade cannot coexist with war, and sooner or later this spirit dominates every people. For among all those powers (or means) that belong to a nation, financial power may be the most reliable in forcing nations to pursue the noble cause of peace (though not from moral motives); and wherever in the world war threatens to break out, they will try to head it off through mediation, just as if they were permanently leagued for this purpose.",
"title": "Political economy"
},
{
"paragraph_id": 51,
"text": "Cobden believed that military expenditures worsened the welfare of the state and benefited a small, but concentrated elite minority, summing up British imperialism, which he believed was the result of the economic restrictions of mercantilist policies. To Cobden and many classical liberals, those who advocated peace must also advocate free markets. The belief that free trade would promote peace was widely shared by English liberals of the 19th and early 20th century, leading the economist John Maynard Keynes (1883–1946), who was a classical liberal in his early life, to say that this was a doctrine on which he was \"brought up\" and which he held unquestioned only until the 1920s. In his review of a book on Keynes, Michael S. Lawlor argues that it may be in large part due to Keynes' contributions in economics and politics, as in the implementation of the Marshall Plan and the way economies have been managed since his work, \"that we have the luxury of not facing his unpalatable choice between free trade and full employment\". A related manifestation of this idea was the argument of Norman Angell (1872–1967), most famously before World War I in The Great Illusion (1909), that the interdependence of the economies of the major powers was now so great that war between them was futile and irrational; and therefore unlikely.",
"title": "Political economy"
},
{
"paragraph_id": 52,
"text": "Although general libertarian, liberal-conservative and some right-wing populist political parties are also included in classical liberal parties in a broad sense, but only general classical liberal parties such as Germany's FDP, Denmark's Liberal Alliance and Thailand Democrat Party should be listed.",
"title": "Classical liberal parties worldwide"
},
{
"paragraph_id": 53,
"text": "Tadd Wilson, writing for the libertarian Foundation for Economic Education, noted that \"Many on the left and right criticize classical liberals for focusing purely on economics and politics to the neglect of a vital issue: culture.\"",
"title": "Criticism"
},
{
"paragraph_id": 54,
"text": "Helena Vieira, writing for the London School of Economics, argued that classical liberalism \"may contradict some fundamental democratic principles as they are inconsistent with the principle of unanimity (also known as the Pareto Principle) – the idea that if everyone in society prefers a policy A to a policy B, then the former should be adopted.\"",
"title": "Criticism"
}
] | Classical liberalism is a political tradition and a branch of liberalism which advocates free market and laissez-faire economics; and civil liberties under the rule of law, with special emphasis on individual autonomy, limited government, economic freedom, political freedom and freedom of speech. Classical liberalism, contrary to liberal branches like social liberalism, looks more negatively on social policies, taxation and the state involvement in the lives of individuals, and it advocates deregulation. Until the Great Depression and the rise of social liberalism, classical liberalism was called economic liberalism. Later, the term was applied as a retronym, to distinguish earlier 19th-century liberalism from social liberalism. By modern standards, in the United States, simple liberalism often means social liberalism, but in Europe and Australia, simple liberalism often means classical liberalism. Classical liberalism gained full flowering in the early 18th century, building on ideas starting at least as far back as the 16th century, within the Iberian, Anglo-Saxon, and central European contexts, and it was foundational to the American Revolution and "American Project" more broadly. Notable liberal individuals whose ideas contributed to classical liberalism include John Locke, Jean-Baptiste Say, Thomas Malthus, and David Ricardo. It drew on classical economics, especially the economic ideas as espoused by Adam Smith in Book One of The Wealth of Nations, and on a belief in natural law, social progress, and utilitarianism. In contemporary times, Friedrich Hayek, Milton Friedman, Ludwig von Mises, Thomas Sowell, George Stigler and Larry Arnhart are seen as the most prominent advocates of classical liberalism. However, other scholars have made reference to these contemporary thoughts as neoclassical liberalism, distinguishing them from 18th-century classical liberalism. In the context of American politics, "classical liberalism" may be described as "fiscally conservative" and "socially liberal". Despite this, classical liberals tend to reject the right's higher tolerance for economic protectionism and the left's inclination for collective group rights due to classical liberalism's central principle of individualism. Additionally, in the United States, classical liberalism is considered closely tied to, or synonymous with, American libertarianism. | 2001-10-12T16:08:19Z | 2023-12-25T12:13:42Z | [
"Template:Snd",
"Template:Liberalism US",
"Template:Original research",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:ISBN?",
"Template:Use dmy dates",
"Template:Sfn",
"Template:Philosophy topics",
"Template:Cite journal",
"Template:Cite speech",
"Template:Commonscatinline",
"Template:Liberalism",
"Template:Source needed",
"Template:Cite web",
"Template:Dead link",
"Template:Dubious",
"Template:Cite news",
"Template:Div col end",
"Template:Efn",
"Template:ISBN",
"Template:JSTOR",
"Template:Refbegin",
"Template:Refend",
"Template:About",
"Template:Capitalism sidebar",
"Template:Lang",
"Template:Portal",
"Template:Libertarianism US",
"Template:Blockquote",
"Template:Notelist",
"Template:Wikiquote-inline",
"Template:EngvarB",
"Template:Liberalism sidebar",
"Template:Short description",
"Template:Cite book",
"Template:Webarchive",
"Template:Citation",
"Template:Wiktionary-inline",
"Template:Clearleft",
"Template:Div col"
] | https://en.wikipedia.org/wiki/Classical_liberalism |
6,678 | Cat | The cat (Felis catus), commonly referred to as the domestic cat or house cat, is the only domesticated species in the family Felidae. Recent advances in archaeology and genetics have shown that the domestication of the cat occurred in the Near East around 7500 BC. It is commonly kept as a house pet and farm cat, but also ranges freely as a feral cat avoiding human contact. It is valued by humans for companionship and its ability to kill vermin. Its retractable claws are adapted to killing small prey like mice and rats. It has a strong, flexible body, quick reflexes, sharp teeth, and well-developed night vision and sense of smell. It is a social species, but a solitary hunter and a crepuscular predator. Cat communication includes vocalizations like meowing, purring, trilling, hissing, growling, and grunting as well as body language. It can hear sounds too faint or too high in frequency for human ears, such as those made by small mammals. It also secretes and perceives pheromones.
Female domestic cats can have kittens from spring to late autumn in temperate zones and throughout the year in equatorial regions, with litter sizes often ranging from two to five kittens. Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Animal population control of cats may be achieved by spaying and neutering, but their proliferation and the abandonment of pets has resulted in large numbers of feral cats worldwide, contributing to the extinction of bird, mammal and reptile species.
As of 2017, the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats as of 2020. As of 2021, there were an estimated 220 million owned and 480 million stray cats in the world.
The origin of the English word cat, Old English catt, is thought to be the Late Latin word cattus, which was first used at the beginning of the 6th century. The Late Latin word may be derived from an unidentified African language. The Nubian word kaddîska 'wildcat' and Nobiin kadīs are possible sources or cognates. The Nubian word may be a loan from Arabic قَطّ qaṭṭ ~ قِطّ qiṭṭ.
However, it is "equally likely that the forms might derive from an ancient Germanic word, imported into Latin and thence to Greek and to Syriac and Arabic". The word may be derived from Germanic and Northern European languages, and ultimately be borrowed from Uralic, cf. Northern Sámi gáđfi, 'female stoat', and Hungarian hölgy, 'lady, female stoat'; from Proto-Uralic *käďwä, 'female (of a furred animal)'.
The English puss, extended as pussy and pussycat, is attested from the 16th century and may have been introduced from Dutch poes or from Low German puuskatte, related to Swedish kattepus, or Norwegian pus, pusekatt. Similar forms exist in Lithuanian puižė and Irish puisín or puiscín. The etymology of this word is unknown, but it may have arisen from a sound used to attract a cat.
A male cat is called a tom or tomcat (or a gib, if neutered). A female is called a queen (or a molly, if spayed), especially in a cat-breeding context. A juvenile cat is referred to as a kitten. In Early Modern English, the word kitten was interchangeable with the now-obsolete word catling. A group of cats can be referred to as a clowder or a glaring.
The scientific name Felis catus was proposed by Carl Linnaeus in 1758 for a domestic cat. Felis catus domesticus was proposed by Johann Christian Polycarp Erxleben in 1777. Felis daemon, proposed by Konstantin Satunin in 1904, was a black cat from the Transcaucasus, later identified as a domestic cat.
In 2003, the International Commission on Zoological Nomenclature ruled that the domestic cat is a distinct species, namely Felis catus. In 2007, it was considered a subspecies, F. silvestris catus, of the European wildcat (F. silvestris) following results of phylogenetic research. In 2017, the IUCN Cat Classification Taskforce followed the recommendation of the ICZN in regarding the domestic cat as a distinct species, Felis catus.
The domestic cat is a member of the Felidae, a family that had a common ancestor about 10 to 15 million years ago. The evolutionary radiation of the Felidae began in Asia during the Miocene around 8.38 to 14.45 million years ago. Analysis of mitochondrial DNA of all Felidae species indicates a radiation at 6.46 to 16.76 million years ago. The genus Felis genetically diverged from other Felidae around 6 to 7 million years ago. Results of phylogenetic research show that the wild members of this genus evolved through sympatric or parapatric speciation, whereas the domestic cat evolved through artificial selection. The domestic cat and its closest wild ancestor are diploid and both possess 38 chromosomes and roughly 20,000 genes.
It was long thought that the domestication of the cat began in ancient Egypt, where cats were venerated from around 3100 BC. However, the earliest known indication of the taming of an African wildcat was excavated close by a human Neolithic grave in Shillourokambos, southern Cyprus, dating to about 7500–7200 BC. Since there is no evidence of native mammalian fauna on Cyprus, the inhabitants of this Neolithic village most likely brought the cat and other wild mammals to the island from the Middle Eastern mainland. Scientists therefore assume that African wildcats were attracted to early human settlements in the Fertile Crescent by rodents, in particular the house mouse (Mus musculus), and were tamed by Neolithic farmers. This mutual relationship between early farmers and tamed cats lasted thousands of years. As agricultural practices spread, so did tame and domesticated cats. Wildcats of Egypt contributed to the maternal gene pool of the domestic cat at a later time.
The earliest known evidence for the occurrence of the domestic cat in Greece dates to around 1200 BC. Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe. During the Roman Empire they were introduced to Corsica and Sardinia before the beginning of the 1st millennium. By the 5th century BC, they were familiar animals around settlements in Magna Graecia and Etruria. By the end of the Western Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany.
The leopard cat (Prionailurus bengalensis) was tamed independently in China around 5500 BC. This line of partially domesticated cats leaves no trace in the domestic cat populations of today.
During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play, and high intelligence. Captive Leopardus cats may also display affectionate behavior toward humans but were not domesticated. House cats often mate with feral cats. Hybridisation between domestic and other Felinae species is also possible, producing hybrids such as the Kellas cat in Scotland.
Development of cat breeds started in the mid-19th century. An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds. Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders.
The domestic cat has a smaller skull and shorter bones than the European wildcat. It averages about 46 cm (18 in) in head-to-body length and 23–25 cm (9.1–9.8 in) in height, with a tail about 30 cm (12 in) long. Males are larger than females. Adult domestic cats typically weigh 4–5 kg (8.8–11.0 lb).
Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only three to five vestigial caudal vertebrae, fused into an internal coccyx). The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. Attached to the spine are 13 ribs, the shoulder, and the pelvis. Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head.
The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw. Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death. Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae.
The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication. Cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar. Nonetheless, they are subject to occasional tooth loss and infection.
Cats have protractible and retractable claws. In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows for the silent stalking of prey. The claws on the forefeet are typically sharper than those on the hindfeet. Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces.
Most cats have five claws on their front paws and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth "finger". This special feature of the front paws on the inside of the wrists has no function in normal walking but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits ("polydactyly"). Polydactylous cats occur along North America's northeast coast and in Great Britain.
The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg. Unlike most mammals, it uses a "pacing" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up from walking to trotting, its gait changes to a "diagonal" gait: The diagonally opposite hind and fore legs move simultaneously.
Cats are generally fond of sitting in high places or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. A cat falling from heights of up to 3 m (9.8 ft) can right itself and land on its paws.
During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex. A cat always rights itself in the same way during a fall, if it has enough time to do so, which is the case in falls of 90 cm (3.0 ft) or more. How cats are able to right themselves when falling has been investigated as the "falling cat problem".
Members of the cat family (Felidae) can pass down many colors and patterns to their offspring. The domestic cat genes MC1R and ASIP allow for the variety of color in coats. The feline ASIP gene consists of three coding exons. Three novel microsatellite markers linked to ASIP were isolated from a domestic cat BAC clone containing this gene and were used to perform linkage analysis in a pedigree of 89 domestic cats that segregated for melanism.
Cats have excellent night vision and can see at only one-sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. At low light, a cat's pupils expand to cover most of the exposed surface of its eyes. The domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited. A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. This appears to be an adaptation to low light levels rather than representing true trichromatic vision. Cats also have a nictitating membrane, allowing them to blink without hindering their vision.
The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz. It can detect an extremely broad range of frequencies ranging from 55 Hz to 79 kHz, whereas humans can only detect frequencies between 20 Hz and 20 kHz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves. Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey. Recent research has shown that cats have socio-spatial cognitive abilities to create mental maps of owners' locations based on hearing owners' voices.
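The octave figures can be checked directly from the quoted limits: one octave corresponds to a doubling of frequency, so the number of octaves in a hearing range is the base-2 logarithm of the ratio of its upper to lower limit (here f_min = 55 Hz and f_max = 79 kHz simply label the limits quoted above):

\[
\log_2 \frac{f_{\max}}{f_{\min}} = \log_2 \frac{79\,000\ \text{Hz}}{55\ \text{Hz}} \approx 10.5 \text{ octaves}
\]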
Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about 5.8 cm² (0.90 in²) in area, which is about twice that of humans. Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol, which they use to communicate through urine spraying and marking with scent glands. Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion. About 70–80% of cats are affected by nepetalactone. This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors.
Cats have relatively few taste buds compared to humans (470 or so versus more than 9,000 on the human tongue). Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness. They, however, possess taste bud receptors specialized for acids, amino acids (the building blocks of protein), and bitter tastes. Their taste buds possess the receptors needed to detect umami. However, these receptors contain molecular changes that make the cat's taste of umami different from that of humans. In humans, these receptors detect the amino acids glutamic acid and aspartic acid, but in cats they instead detect nucleotides, in this case inosine monophosphate and l-Histidine. These nucleotides are particularly enriched in tuna. It has been argued that this is why cats find tuna so palatable: as researchers into cat taste put it, "the specific combination of the high IMP and free l-Histidine contents of tuna ... produces a strong umami taste synergy that is highly preferred by cats". One of the researchers involved has further claimed, "I think umami is as important for cats as sweet is for humans".
Cats also have a distinct temperature preference for their food, preferring food with a temperature around 38 °C (100 °F) which is similar to that of a fresh kill; some cats reject cold food (which would signal to the cat that the "prey" item is long dead and therefore possibly toxic or decomposing).
To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage.
Outdoor cats are active both day and night, although they tend to be slightly more active at night. Domestic cats spend the majority of their time in the vicinity of their homes but can range many hundreds of meters from this central point. They establish territories that vary considerably in size, ranging in one study from 7 to 28 ha (17–69 acres). The timing of cats' activity is quite flexible and varied, but being low-light predators, they are generally crepuscular, which means they tend to be more active near dawn and dusk. However, house cats' behavior is also influenced by human activity and they may adapt to their owners' sleeping patterns to some extent.
Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 to 14 hours being the average. Some cats can sleep as much as 20 hours. The term "cat nap" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep, often accompanied by muscle twitches, which suggests they are dreaming.
The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females. Within such groups, one cat is usually dominant over the others. Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, by rubbing objects at head height with secretions from facial glands, and by defecation. Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling and, if that does not work, by short but noisy and violent attacks. Despite this colonial organization, cats do not have a social survival strategy or a herd behavior, and always hunt alone.
Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, a cat's human keeper functions as a mother surrogate. Adult cats live their lives in a kind of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore. Some pet cats are poorly socialized. In particular, older cats show aggressiveness toward newly arrived kittens, which may include biting and scratching; this type of behavior is known as feline asocial aggression.
Redirected aggression is a common form of aggression which can occur in multiple cat households. In redirected aggression there is usually something that agitates the cat: this could be a sight, sound, or another source of stimuli which causes a heightened level of anxiety or arousal. If the cat cannot attack the stimuli, it may direct anger elsewhere by attacking or directing aggression to the nearest cat, dog, human or other being.
Domestic cats' scent rubbing behavior toward humans or other cats is thought to be a feline means for social bonding.
Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing. Aspects of their body language, including the position of the ears and tail, relaxation of the whole body, and kneading of the paws, are all indicators of mood. The tail and ears are particularly important social signal mechanisms in cats. A raised tail indicates a friendly greeting, and flattened ears indicate hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones. Feral cats are generally silent. Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head.
Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens, who are thought to use it as a care-soliciting signal. Post-nursing cats also often purr as a sign of contentment: when being petted, becoming relaxed, or eating. Even though purring is popularly interpreted as indicative of pleasure, it has been recorded in a wide variety of circumstances, most of which involve physical contact between the cat and another, presumably trusted individual. Some cats have been observed to purr continuously when chronically ill or in apparent pain.
The exact mechanism by which cats purr has long been elusive, but it has been proposed that purring is generated via a series of sudden build-ups and releases of pressure as the glottis is opened and closed, which causes the vocal folds to separate forcefully. The laryngeal muscles in control of the glottis are thought to be driven by a neural oscillator which generates a cycle of contraction and release every 30–40 milliseconds, giving a frequency of 25 to 33 Hz.
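Since frequency is the reciprocal of the period, this range follows directly from the cycle time: a 40 ms cycle corresponds to 1/0.040 s = 25 Hz, and a 30 ms cycle to 1/0.030 s ≈ 33 Hz.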
Domestic cats observed in a rescue facility were found to produce a total of 276 distinct facial expressions based on 26 different facial movements; each facial expression corresponds to different social functions that are likely influenced by domestication.
Cats are known for spending considerable amounts of time licking their coats to keep them clean. The cat's tongue has backward-facing spines about 500 μm long, called papillae, which contain keratin that makes them rigid, so the papillae act like a hairbrush. Some cats, particularly longhaired cats, occasionally regurgitate hairballs of fur that have collected in their stomachs from grooming. These clumps of fur are usually sausage-shaped and about 2–3 cm (0.79–1.18 in) long. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as regular grooming of the coat with a comb or stiff brush.
Among domestic cats, males are more likely to fight than females. Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male. Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home. Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones.
When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways and hissing or spitting. Often, the ears are pointed down and back to avoid damage to the inner ear and potentially to listen for any changes behind them while focused forward. Cats may also vocalize loudly and bare their teeth in an effort to further intimidate their opponents. Fights usually consist of grappling and delivering powerful slaps to the face and body with the forepaws as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their powerful hind legs.
Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. Fights for mating rights are typically more severe, and injuries may include deep puncture wounds and lacerations. Normally, serious injuries from fighting are limited to infections of scratches and bites, though these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of feline immunodeficiency virus. Sexually active males are usually involved in many fights during their lives, and often have decidedly battered faces with obvious scars and cuts to their ears and nose. Cats are willing to threaten animals larger than themselves, such as dogs and foxes, to defend their territory.
The shape and structure of cats' cheeks are insufficient to allow them to take in liquids using suction. Therefore, when drinking, they lap with the tongue to draw liquid upward into their mouths. Lapping at a rate of four times a second, the cat touches the smooth tip of its tongue to the surface of the water, and quickly retracts it like a corkscrew, drawing water upward.
Feral cats and free-fed house cats consume several small meals in a day. The frequency and size of meals varies between individuals. They select food based on its temperature, smell and texture; they dislike chilled foods and respond most strongly to moist foods rich in amino acids, which are similar to meat. Cats reject novel flavors (a response termed neophobia) and learn quickly to avoid foods that have tasted unpleasant in the past. It is also a common misconception that cats like milk/cream, as they tend to avoid sweet food and milk. Most adult cats are lactose intolerant; the sugar in milk is not easily digested and may cause soft stools or diarrhea. Some also develop odd eating habits and like to eat or chew on things like wool, plastic, cables, paper, string, aluminum foil, or even coal. This condition, pica, can threaten their health, depending on the amount and toxicity of the items eaten.
Cats hunt small prey, primarily birds and rodents, and are often used as a form of pest control. Other common small creatures such as lizards and snakes may also become prey. Cats use two hunting strategies, either stalking prey actively, or waiting in ambush until an animal comes close enough to be captured. The strategy used depends on the prey species in the area, with cats waiting in ambush outside burrows, but tending to actively stalk birds. Domestic cats are a major predator of wildlife in the United States, killing an estimated 1.3 to 4.0 billion birds and 6.3 to 22.3 billion mammals annually.
Certain species appear more susceptible than others; in one English village, for example, 30% of house sparrow mortality was linked to the domestic cat. In the recovery of ringed robins (Erithacus rubecula) and dunnocks (Prunella modularis) in Britain, 31% of deaths were a result of cat predation. In parts of North America, the presence of larger carnivores such as coyotes, which prey on cats, reduces the effect of predation by cats and other small predators such as opossums and raccoons on bird numbers and variety.
Perhaps the best-known element of cats' hunting behavior, which is commonly misunderstood and often appalls cat owners because it looks like torture, is that cats often appear to "play" with prey by releasing and recapturing it. This cat and mouse behavior is due to an instinctive imperative to ensure that the prey is weak enough to be killed without endangering the cat.
Another poorly understood element of cat hunting behavior is the presentation of prey to human guardians. One explanation is that cats adopt humans into their social group and share excess kill with others in the group according to the dominance hierarchy, in which humans are treated as if they are at or near the top. Another explanation is that they attempt to teach their guardians to hunt or to help their human as if feeding "an elderly cat, or an inept kitten". This hypothesis is inconsistent with the fact that male cats also bring home prey, despite males having negligible involvement in raising kittens.
Domestic cats, especially young kittens, are known for their love of play. This behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. Cats also engage in play fighting, with each other and with humans. This behavior may be a way for cats to practice the skills needed for real combat, and might also reduce any fear they associate with launching attacks on other animals.
Cats also tend to play with toys more when they are hungry. Owing to the close similarity between play and hunting, cats prefer to play with objects that resemble prey, such as small furry toys that move rapidly, but they rapidly lose interest, becoming habituated to a toy they have played with before. String is often used as a toy, but if it is eaten, it can become caught at the base of the cat's tongue and then move into the intestines, a medical emergency which can cause serious illness, even death. Owing to the risks posed by cats eating string, it is sometimes replaced with a laser pointer's dot, which cats may chase.
The cat secretes and perceives pheromones. Female cats, called queens, are polyestrous with several estrus cycles during a year, each usually lasting 21 days. They are usually ready to mate between early February and August in northern temperate zones and throughout the year in equatorial regions.
Several males, called tomcats, are attracted to a female in heat. They fight over her, and the victor wins the right to mate. At first, the female rejects the male, but eventually she allows him to mate. The female utters a loud yowl as the male pulls out of her because a male cat's penis has a band of about 120–150 backward-pointing penile spines, which are about 1 mm (0.039 in) long; upon withdrawal of the penis, the spines may provide the female with increased sexual stimulation, which acts to induce ovulation.
After mating, the female cleans her vulva thoroughly. If a male attempts to mate with her at this point, the female attacks him. After about 20 to 30 minutes, once the female is finished grooming, the cycle will repeat. Because ovulation is not always triggered by a single mating, females may not be impregnated by the first male with which they mate. Furthermore, cats are superfecund; that is, a female may mate with more than one male when she is in heat, with the result that different kittens in a litter may have different fathers.
The morula forms 124 hours after conception. At 148 hours, early blastocysts form. At 10–12 days, implantation occurs. The gestation of queens lasts between 64 and 67 days, with an average of 65 days.
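Converted to days, these early milestones fall at roughly 5.2 days (124/24) for the morula and 6.2 days (148/24) for early blastocysts, well before implantation at 10–12 days.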
Data on the reproductive capacity of more than 2,300 free-ranging queens were collected during a study between May 1998 and October 2000. They had one to six kittens per litter, with an average of three kittens. They produced a mean of 1.4 litters per year, but a maximum of three litters in a year. Of 169 kittens, 127 died before they were six months old due to trauma, caused in most cases by dog attacks and road accidents. The first litter is usually smaller than subsequent litters. Kittens are weaned between six and seven weeks of age. Queens normally reach sexual maturity at 5–10 months, and males at 5–7 months, though this varies by breed. Kittens reach puberty at the age of 9–10 months.
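Taken together, these figures imply a kitten mortality of roughly 75% (127/169 ≈ 0.75) before six months of age in this free-ranging population.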
Cats are ready to go to new homes at about 12 weeks of age, when they are ready to leave their mother. They can be surgically sterilized (spayed or castrated) as early as seven weeks to limit unwanted reproduction. This surgery also prevents undesirable sex-related behavior, such as aggression, territory marking (spraying urine) in males and yowling (calling) in females. Traditionally, this surgery was performed at around six to nine months of age, but it is increasingly being performed before puberty, at about three to six months. In the United States, about 80% of household cats are neutered.
The average lifespan of pet cats has risen in recent decades. In the early 1980s, it was about seven years, rising to 9.4 years in 1995 and an average of about 13 years as of 2014 and 2023. Some cats have been reported as surviving into their 30s, with the oldest known cat dying at a verified age of 38.
Neutering increases life expectancy: one study found castrated male cats live twice as long as intact males, while spayed female cats live 62% longer than intact females. Having a cat neutered confers health benefits, because castrated males cannot develop testicular cancer, spayed females cannot develop uterine or ovarian cancer, and both have a reduced risk of mammary cancer.
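To put the reported effect sizes in concrete terms (the baseline lifespans here are illustrative only): an intact male living 7 years would be expected to live about 14 years if castrated, and an intact female living 10 years would be expected to live about 16.2 years if spayed (10 × 1.62).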
About 250 heritable genetic disorders have been identified in cats, many similar to human inborn errors of metabolism. The high level of similarity among the metabolism of mammals allows many of these feline diseases to be diagnosed using genetic tests that were originally developed for use in humans, as well as the use of cats as animal models in the study of the human diseases. Diseases affecting domestic cats include acute infections, parasitic infestations, injuries, and chronic diseases such as kidney disease, thyroid disease, and arthritis. Vaccinations are available for many infectious diseases, as are treatments to eliminate parasites such as worms, ticks, and fleas.
The domestic cat is a cosmopolitan species and occurs across much of the world. It is adaptable and now present on all continents except Antarctica, and on 118 of the 131 main groups of islands, even on the isolated Kerguelen Islands. Due to its ability to thrive in almost any terrestrial habitat, it is among the world's most invasive species. It lives on small islands with no human inhabitants. Feral cats can live in forests, grasslands, tundra, coastal areas, agricultural land, scrublands, urban areas, and wetlands.
Two factors lead to the domestic cat being treated as an invasive species. On one hand, as it is little altered from the wildcat, it can readily interbreed with the wildcat. This hybridization poses a danger to the genetic distinctiveness of some wildcat populations, particularly in Scotland and Hungary, possibly also the Iberian Peninsula, and where protected natural areas are close to human-dominated landscapes, such as Kruger National Park in South Africa. On the other hand, its introduction to places where no native felines are present also contributes to the decline of native species.
Feral cats are domestic cats that were born in or have reverted to a wild state. They are unfamiliar with and wary of humans and roam freely in urban and rural areas. The number of feral cats is not known, but estimates of the United States feral population range from 25 to 60 million. Feral cats may live alone, but most are found in large colonies, which occupy a specific territory and are usually associated with a source of food. Famous feral cat colonies are found in Rome around the Colosseum and Forum Romanum, with cats at some of these sites being fed and given medical attention by volunteers.
Public attitudes toward feral cats vary widely, from seeing them as free-ranging pets to regarding them as vermin.
Some feral cats can be successfully socialized and 're-tamed' for adoption; young cats, especially kittens, and cats that have had prior experience and contact with humans are the most receptive to these efforts.
On islands, birds can contribute as much as 60% of a cat's diet. In nearly all cases, the cat cannot be identified as the sole cause of declining island bird numbers, and in some instances eradication of cats has caused a "mesopredator release" effect, in which the suppression of top carnivores creates an abundance of smaller predators that cause a severe decline in their shared prey. Domestic cats are a contributing factor to the decline of many species, a factor that has ultimately led, in some cases, to extinction. The South Island piopio, Chatham rail, and the New Zealand merganser are a few from a long list, with the most extreme case being the flightless Lyall's wren, which was driven to extinction only a few years after its discovery. One feral cat in New Zealand killed 102 New Zealand lesser short-tailed bats in seven days. In the US, feral and free-ranging domestic cats kill an estimated 6.3–22.3 billion mammals annually.
In Australia, the impact of cats on mammal populations is even greater than the impact of habitat loss. More than one million reptiles are killed by feral cats each day, representing 258 species. Cats have contributed to the extinction of the Navassa curly-tailed lizard and Chioninia coctei.
Cats are common pets throughout the world, and their worldwide population as of 2007 exceeded 500 million. As of 2017, the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats as of 2020. As of 2021, there were an estimated 220 million owned and 480 million stray cats in the world.
Cats have been used for millennia to control rodents, notably around grain stores and aboard ships, and both uses extend to the present day.
As well as being kept as pets, cats are also used in the international fur trade and leather industries for making coats, hats, blankets, stuffed toys, shoes, gloves, and musical instruments. About 24 cats are needed to make a cat-fur coat. This use has been outlawed in the United States since 2000 and in the European Union (as well as the United Kingdom) since 2007.
Cat pelts have been used for superstitious purposes as part of the practice of witchcraft, and are still made into blankets in Switzerland as a traditional medicine thought to cure rheumatism.
A few attempts to build a cat census have been made over the years, both through associations or national and international organizations (such as the Canadian Federation of Humane Societies) and over the Internet, but the task does not appear simple to achieve. General estimates for the global population of domestic cats range widely, from 200 million to 600 million. Walter Chandoha made his career photographing cats after his 1949 images of Loco, an especially charming stray he had taken in, were published around the world. He is reported to have photographed 90,000 cats during his career and maintained an archive of 225,000 images that he drew from for publications during his lifetime.
A cat show is a judged event in which the owners of cats compete to win titles in various cat-registering organizations by entering their cats to be judged against a breed standard. Cats are often required to be healthy and vaccinated in order to participate in a show. Both pedigreed and non-purebred companion ("moggy") cats are admissible, although the rules differ between organizations. Competing cats are compared to the applicable breed standard and assessed for temperament.
Cats can be infected or infested with viruses, bacteria, fungi, protozoans, arthropods or worms that can transmit diseases to humans. In some cases, the cat exhibits no symptoms of the disease, yet the same disease can then become evident in a human. The likelihood that a person will become diseased depends on the age and immune status of the person. Humans who have cats living in their home or in close association with them are more likely to become infected. Others might also acquire infections from cat feces and parasites exiting the cat's body. Some of the infections of most concern include salmonella, cat-scratch disease and toxoplasmosis.
In ancient Egypt, cats were worshipped, and the goddess Bastet was often depicted in cat form, sometimes taking on the war-like aspect of a lioness. The Greek historian Herodotus reported that killing a cat was forbidden, and when a household cat died, the entire family mourned and shaved their eyebrows. Families took their dead cats to the sacred city of Bubastis, where they were embalmed and buried in sacred repositories. Herodotus expressed astonishment at the domestic cats in Egypt, because he had only ever seen wildcats.
Ancient Greeks and Romans kept weasels as pets, which were seen as the ideal rodent-killers. The earliest unmistakable evidence of the Greeks having domestic cats comes from two coins from Magna Graecia dating to the mid-fifth century BC showing Iokastos and Phalanthos, the legendary founders of Rhegion and Taras respectively, playing with their pet cats. The usual ancient Greek word for 'cat' was ailouros, meaning 'thing with the waving tail'. Cats are rarely mentioned in ancient Greek literature. Aristotle remarked in his History of Animals that "female cats are naturally lecherous." The Greeks later syncretized their own goddess Artemis with the Egyptian goddess Bastet, adopting Bastet's associations with cats and ascribing them to Artemis. In Ovid's Metamorphoses, when the deities flee to Egypt and take animal forms, the goddess Diana turns into a cat.
Cats eventually displaced weasels as the pest control of choice because they were more pleasant to have around the house and were more enthusiastic hunters of mice. During the Middle Ages, many of Artemis's associations with cats were grafted onto the Virgin Mary. Cats are often shown in icons of Annunciation and of the Holy Family and, according to Italian folklore, on the same night that Mary gave birth to Jesus, a cat in Bethlehem gave birth to a kitten. Domestic cats were spread throughout much of the rest of the world during the Age of Discovery, as ships' cats were carried on sailing ships to control shipboard rodents and as good-luck charms.
Several ancient religions believed cats to be exalted souls, companions or guides for humans, all-knowing but mute so that they cannot influence decisions made by humans. In Japan, the maneki neko cat is a symbol of good fortune. In Norse mythology, Freyja, the goddess of love, beauty, and fertility, is depicted as riding a chariot drawn by cats. In Jewish legend, the first cat lived in the house of the first man, Adam, as a pet that got rid of mice. The cat was once a partner of the first dog, before the latter broke an oath they had made, which resulted in enmity between the descendants of these two animals. It is also written that neither cats nor foxes are represented in the water, while every other animal has an incarnation species in the water. Although no species are sacred in Islam, cats are revered by Muslims. Some Western writers have stated that Muhammad had a favorite cat, Muezza. He is reported to have loved cats so much that "he would do without his cloak rather than disturb one that was sleeping on it". The story has no origin in early Muslim writers, and seems to confuse a story of a later Sufi saint, Ahmed ar-Rifa'i, centuries after Muhammad. One of the companions of Muhammad was known as Abu Hurayrah ("father of the kitten"), in reference to his documented affection for cats.
Many cultures have negative superstitions about cats. An example would be the belief that encountering a black cat ("crossing one's path") leads to bad luck, or that cats are witches' familiars used to augment a witch's powers and skills. The killing of cats in Medieval Ypres, Belgium, is commemorated in the innocuous present-day Kattenstoet (cat parade). In mid-16th century France, cats would be burnt alive as a form of entertainment, particularly during midsummer festivals. According to Norman Davies, the assembled people "shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized". The remaining ashes were sometimes taken back home by the people for good luck.
According to a myth in many cultures, cats have multiple lives. In many countries, they are believed to have nine lives, but in Italy, Germany, Greece, Brazil and some Spanish-speaking regions, they are said to have seven lives, while in Arabic traditions, the number of lives is six. An early mention of the myth can be found in John Heywood's The Proverbs of John Heywood (1546):
Husband, (quoth she), ye studie, be merrie now,
And even as ye thinke now, so come to yow.
Nay not so, (quoth he), for my thought to tell right,
I thinke how you lay groning, wife, all last night.
Husband, a groning horse and a groning wife
Never faile their master, (quoth she), for my life.
No wife, a woman hath nine lives like a cat.
The myth is attributed to the natural suppleness and swiftness cats exhibit to escape life-threatening situations. Also lending credence to this myth is the fact that falling cats often land on their feet, using an instinctive righting reflex to twist their bodies around. Nonetheless, cats can still be injured or killed by a high fall. | [
{
"paragraph_id": 0,
"text": "The cat (Felis catus), commonly referred to as the domestic cat or house cat, is the only domesticated species in the family Felidae. Recent advances in archaeology and genetics have shown that the domestication of the cat occurred in the Near East around 7500 BC. It is commonly kept as a house pet and farm cat, but also ranges freely as a feral cat avoiding human contact. It is valued by humans for companionship and its ability to kill vermin. Because of its retractable claws it is adapted to killing small prey like mice and rats. It has a strong flexible body, quick reflexes, sharp teeth, and its night vision and sense of smell are well developed. It is a social species, but a solitary hunter and a crepuscular predator. Cat communication includes vocalizations like meowing, purring, trilling, hissing, growling, and grunting as well as cat body language. It can hear sounds too faint or too high in frequency for human ears, such as those made by small mammals. It also secretes and perceives pheromones.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Female domestic cats can have kittens from spring to late autumn in temperate zones and throughout the year in equatorial regions, with litter sizes often ranging from two to five kittens. Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Animal population control of cats may be achieved by spaying and neutering, but their proliferation and the abandonment of pets has resulted in large numbers of feral cats worldwide, contributing to the extinction of bird, mammal and reptile species.",
"title": ""
},
{
"paragraph_id": 2,
"text": "As of 2017, the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats as of 2020. As of 2021, there were an estimated 220 million owned and 480 million stray cats in the world.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The origin of the English word cat, Old English catt, is thought to be the Late Latin word cattus, which was first used at the beginning of the 6th century. The Late Latin word may be derived from an unidentified African language. The Nubian word kaddîska 'wildcat' and Nobiin kadīs are possible sources or cognates. The Nubian word may be a loan from Arabic قَطّ qaṭṭ ~ قِطّ qiṭṭ.",
"title": "Etymology and naming"
},
{
"paragraph_id": 4,
"text": "However, it is \"equally likely that the forms might derive from an ancient Germanic word, imported into Latin and thence to Greek and to Syriac and Arabic\". The word may be derived from Germanic and Northern European languages, and ultimately be borrowed from Uralic, cf. Northern Sámi gáđfi, 'female stoat', and Hungarian hölgy, 'lady, female stoat'; from Proto-Uralic *käďwä, 'female (of a furred animal)'.",
"title": "Etymology and naming"
},
{
"paragraph_id": 5,
"text": "The English puss, extended as pussy and pussycat, is attested from the 16th century and may have been introduced from Dutch poes or from Low German puuskatte, related to Swedish kattepus, or Norwegian pus, pusekatt. Similar forms exist in Lithuanian puižė and Irish puisín or puiscín. The etymology of this word is unknown, but it may have arisen from a sound used to attract a cat.",
"title": "Etymology and naming"
},
{
"paragraph_id": 6,
"text": "A male cat is called a tom or tomcat (or a gib, if neutered). A female is called a queen (or a molly, if spayed), especially in a cat-breeding context. A juvenile cat is referred to as a kitten. In Early Modern English, the word kitten was interchangeable with the now-obsolete word catling. A group of cats can be referred to as a clowder or a glaring.",
"title": "Etymology and naming"
},
{
"paragraph_id": 7,
"text": "The scientific name Felis catus was proposed by Carl Linnaeus in 1758 for a domestic cat. Felis catus domesticus was proposed by Johann Christian Polycarp Erxleben in 1777. Felis daemon proposed by Konstantin Satunin in 1904 was a black cat from the Transcaucasus, later identified as a domestic cat.",
"title": "Taxonomy"
},
{
"paragraph_id": 8,
"text": "In 2003, the International Commission on Zoological Nomenclature ruled that the domestic cat is a distinct species, namely Felis catus. In 2007, it was considered a subspecies, F. silvestris catus, of the European wildcat (F. silvestris) following results of phylogenetic research. In 2017, the IUCN Cat Classification Taskforce followed the recommendation of the ICZN in regarding the domestic cat as a distinct species, Felis catus.",
"title": "Taxonomy"
},
{
"paragraph_id": 9,
"text": "The domestic cat is a member of the Felidae, a family that had a common ancestor about 10 to 15 million years ago. The evolutionary radiation of the Felidae began in Asia during the Miocene around 8.38 to 14.45 million years ago. Analysis of mitochondrial DNA of all Felidae species indicates a radiation at 6.46 to 16.76 million years ago. The genus Felis genetically diverged from other Felidae around 6 to 7 million years ago. Results of phylogenetic research shows that the wild members of this genus evolved through sympatric or parapatric speciation, whereas the domestic cat evolved through artificial selection. The domestic cat and its closest wild ancestor are diploid and both possess 38 chromosomes and roughly 20,000 genes.",
"title": "Evolution"
},
{
"paragraph_id": 10,
"text": "It was long thought that the domestication of the cat began in ancient Egypt, where cats were venerated from around 3100 BC, However, the earliest known indication for the taming of an African wildcat was excavated close by a human Neolithic grave in Shillourokambos, southern Cyprus, dating to about 7500–7200 BC. Since there is no evidence of native mammalian fauna on Cyprus, the inhabitants of this Neolithic village most likely brought the cat and other wild mammals to the island from the Middle Eastern mainland. Scientists therefore assume that African wildcats were attracted to early human settlements in the Fertile Crescent by rodents, in particular the house mouse (Mus musculus), and were tamed by Neolithic farmers. This mutual relationship between early farmers and tamed cats lasted thousands of years. As agricultural practices spread, so did tame and domesticated cats. Wildcats of Egypt contributed to the maternal gene pool of the domestic cat at a later time.",
"title": "Evolution"
},
{
"paragraph_id": 11,
"text": "The earliest known evidence for the occurrence of the domestic cat in Greece dates to around 1200 BC. Greek, Phoenician, Carthaginian and Etruscan traders introduced domestic cats to southern Europe. During the Roman Empire they were introduced to Corsica and Sardinia before the beginning of the 1st millennium. By the 5th century BC, they were familiar animals around settlements in Magna Graecia and Etruria. By the end of the Western Roman Empire in the 5th century, the Egyptian domestic cat lineage had arrived in a Baltic Sea port in northern Germany.",
"title": "Evolution"
},
{
"paragraph_id": 12,
"text": "The leopard cat (Prionailurus bengalensis) was tamed independently in China around 5500 BC. This line of partially domesticated cats leaves no trace in the domestic cat populations of today.",
"title": "Evolution"
},
{
"paragraph_id": 13,
"text": "During domestication, cats have undergone only minor changes in anatomy and behavior, and they are still capable of surviving in the wild. Several natural behaviors and characteristics of wildcats may have pre-adapted them for domestication as pets. These traits include their small size, social nature, obvious body language, love of play, and high intelligence. Captive Leopardus cats may also display affectionate behavior toward humans but were not domesticated. House cats often mate with feral cats. Hybridisation between domestic and other Felinae species is also possible, producing hybrids such as the Kellas cat in Scotland.",
"title": "Evolution"
},
{
"paragraph_id": 14,
"text": "Development of cat breeds started in the mid 19th century. An analysis of the domestic cat genome revealed that the ancestral wildcat genome was significantly altered in the process of domestication, as specific mutations were selected to develop cat breeds. Most breeds are founded on random-bred domestic cats. Genetic diversity of these breeds varies between regions, and is lowest in purebred populations, which show more than 20 deleterious genetic disorders.",
"title": "Evolution"
},
{
"paragraph_id": 15,
"text": "The domestic cat has a smaller skull and shorter bones than the European wildcat. It averages about 46 cm (18 in) in head-to-body length and 23–25 cm (9.1–9.8 in) in height, with about 30 cm (12 in) long tails. Males are larger than females. Adult domestic cats typically weigh 4–5 kg (8.8–11.0 lb).",
"title": "Characteristics"
},
{
"paragraph_id": 16,
"text": "Cats have seven cervical vertebrae (as do most mammals); 13 thoracic vertebrae (humans have 12); seven lumbar vertebrae (humans have five); three sacral vertebrae (as do most mammals, but humans have five); and a variable number of caudal vertebrae in the tail (humans have only three to five vestigial caudal vertebrae, fused into an internal coccyx). The extra lumbar and thoracic vertebrae account for the cat's spinal mobility and flexibility. Attached to the spine are 13 ribs, the shoulder, and the pelvis. Unlike human arms, cat forelimbs are attached to the shoulder by free-floating clavicle bones which allow them to pass their body through any space into which they can fit their head.",
"title": "Characteristics"
},
{
"paragraph_id": 17,
"text": "The cat skull is unusual among mammals in having very large eye sockets and a powerful specialized jaw. Within the jaw, cats have teeth adapted for killing prey and tearing meat. When it overpowers its prey, a cat delivers a lethal neck bite with its two long canine teeth, inserting them between two of the prey's vertebrae and severing its spinal cord, causing irreversible paralysis and death. Compared to other felines, domestic cats have narrowly spaced canine teeth relative to the size of their jaw, which is an adaptation to their preferred prey of small rodents, which have small vertebrae.",
"title": "Characteristics"
},
{
"paragraph_id": 18,
"text": "The premolar and first molar together compose the carnassial pair on each side of the mouth, which efficiently shears meat into small pieces, like a pair of scissors. These are vital in feeding, since cats' small molars cannot chew food effectively, and cats are largely incapable of mastication. Cats tend to have better teeth than most humans, with decay generally less likely because of a thicker protective layer of enamel, a less damaging saliva, less retention of food particles between teeth, and a diet mostly devoid of sugar. Nonetheless, they are subject to occasional tooth loss and infection.",
"title": "Characteristics"
},
{
"paragraph_id": 19,
"text": "Cats have protractible and retractable claws. In their normal, relaxed position, the claws are sheathed with the skin and fur around the paw's toe pads. This keeps the claws sharp by preventing wear from contact with the ground and allows for the silent stalking of prey. The claws on the forefeet are typically sharper than those on the hindfeet. Cats can voluntarily extend their claws on one or more paws. They may extend their claws in hunting or self-defense, climbing, kneading, or for extra traction on soft surfaces. Cats shed the outside layer of their claw sheaths when scratching rough surfaces.",
"title": "Characteristics"
},
{
"paragraph_id": 20,
"text": "Most cats have five claws on their front paws and four on their rear paws. The dewclaw is proximal to the other claws. More proximally is a protrusion which appears to be a sixth \"finger\". This special feature of the front paws on the inside of the wrists has no function in normal walking but is thought to be an antiskidding device used while jumping. Some cat breeds are prone to having extra digits (\"polydactyly\"). Polydactylous cats occur along North America's northeast coast and in Great Britain.",
"title": "Characteristics"
},
{
"paragraph_id": 21,
"text": "The cat is digitigrade. It walks on the toes, with the bones of the feet making up the lower part of the visible leg. Unlike most mammals, it uses a \"pacing\" gait and moves both legs on one side of the body before the legs on the other side. It registers directly by placing each hind paw close to the track of the corresponding fore paw, minimizing noise and visible tracks. This also provides sure footing for hind paws when navigating rough terrain. As it speeds up from walking to trotting, its gait changes to a \"diagonal\" gait: The diagonally opposite hind and fore legs move simultaneously.",
"title": "Characteristics"
},
{
"paragraph_id": 22,
"text": "Cats are generally fond of sitting in high places or perching. A higher place may serve as a concealed site from which to hunt; domestic cats strike prey by pouncing from a perch such as a tree branch. Another possible explanation is that height gives the cat a better observation point, allowing it to survey its territory. A cat falling from heights of up to 3 m (9.8 ft) can right itself and land on its paws.",
"title": "Characteristics"
},
{
"paragraph_id": 23,
"text": "During a fall from a high place, a cat reflexively twists its body and rights itself to land on its feet using its acute sense of balance and flexibility. This reflex is known as the cat righting reflex. A cat always rights itself in the same way during a fall, if it has enough time to do so, which is the case in falls of 90 cm (3.0 ft) or more. How cats are able to right themselves when falling has been investigated as the \"falling cat problem\".",
"title": "Characteristics"
},
{
"paragraph_id": 24,
"text": "The cat family (Felidae) can pass down many colors and patterns to their offspring. The domestic cat genes MC1R and ASIP allow for the variety of color in coats. The feline ASIP gene consists of three coding exons. Three novel microsatellite markers linked to ASIP were isolated from a domestic cat BAC clone containing this gene and were used to perform linkage analysis in a pedigree of 89 domestic cats that segregated for melanism.",
"title": "Characteristics"
},
{
"paragraph_id": 25,
"text": "Cats have excellent night vision and can see at only one-sixth the light level required for human vision. This is partly the result of cat eyes having a tapetum lucidum, which reflects any light that passes through the retina back into the eye, thereby increasing the eye's sensitivity to dim light. Large pupils are an adaptation to dim light. The domestic cat has slit pupils, which allow it to focus bright light without chromatic aberration. At low light, a cat's pupils expand to cover most of the exposed surface of its eyes. The domestic cat has rather poor color vision and only two types of cone cells, optimized for sensitivity to blue and yellowish green; its ability to distinguish between red and green is limited. A response to middle wavelengths from a system other than the rod cells might be due to a third type of cone. This appears to be an adaptation to low light levels rather than representing true trichromatic vision. Cats also have a nictitating membrane, allowing them to blink without hindering their vision.",
"title": "Senses"
},
{
"paragraph_id": 26,
"text": "The domestic cat's hearing is most acute in the range of 500 Hz to 32 kHz. It can detect an extremely broad range of frequencies ranging from 55 Hz to 79 kHz, whereas humans can only detect frequencies between 20 Hz and 20 kHz. It can hear a range of 10.5 octaves, while humans and dogs can hear ranges of about 9 octaves. Its hearing sensitivity is enhanced by its large movable outer ears, the pinnae, which amplify sounds and help detect the location of a noise. It can detect ultrasound, which enables it to detect ultrasonic calls made by rodent prey. Recent research has shown that cats have socio-spatial cognitive abilities to create mental maps of owners' locations based on hearing owners' voices.",
"title": "Senses"
},
{
"paragraph_id": 27,
"text": "Cats have an acute sense of smell, due in part to their well-developed olfactory bulb and a large surface of olfactory mucosa, about 5.8 cm (0.90 in) in area, which is about twice that of humans. Cats and many other animals have a Jacobson's organ in their mouths that is used in the behavioral process of flehmening. It allows them to sense certain aromas in a way that humans cannot. Cats are sensitive to pheromones such as 3-mercapto-3-methylbutan-1-ol, which they use to communicate through urine spraying and marking with scent glands. Many cats also respond strongly to plants that contain nepetalactone, especially catnip, as they can detect that substance at less than one part per billion. About 70–80% of cats are affected by nepetalactone. This response is also produced by other plants, such as silver vine (Actinidia polygama) and the herb valerian; it may be caused by the smell of these plants mimicking a pheromone and stimulating cats' social or sexual behaviors.",
"title": "Senses"
},
{
"paragraph_id": 28,
"text": "Cats have relatively few taste buds compared to humans (470 or so versus more than 9,000 on the human tongue). Domestic and wild cats share a taste receptor gene mutation that keeps their sweet taste buds from binding to sugary molecules, leaving them with no ability to taste sweetness. They, however, possess taste bud receptors specialized for acids, amino acids like protein, and bitter tastes. Their taste buds possess the receptors needed to detect umami. However, these receptors contain molecular changes that make the cat taste of umami different from that of humans. In humans, they detect the amino acids of glutamic acid and aspartic acid, but in cats they instead detect nucleotides, in this case inosine monophosphate and l-Histidine. These nucleotides are particularly enriched in tuna. This has been argued is why cats find tuna so palatable: as put by researchers into cat taste, \"the specific combination of the high IMP and free l-Histidine contents of tuna\" .. \"produces a strong umami taste synergy that is highly preferred by cats\". One of the researchers involved in this research has further claimed, \"I think umami is as important for cats as sweet is for humans\".",
"title": "Senses"
},
{
"paragraph_id": 29,
"text": "Cats also have a distinct temperature preference for their food, preferring food with a temperature around 38 °C (100 °F) which is similar to that of a fresh kill; some cats reject cold food (which would signal to the cat that the \"prey\" item is long dead and therefore possibly toxic or decomposing).",
"title": "Senses"
},
{
"paragraph_id": 30,
"text": "To aid with navigation and sensation, cats have dozens of movable whiskers (vibrissae) over their body, especially their faces. These provide information on the width of gaps and on the location of objects in the dark, both by touching objects directly and by sensing air currents; they also trigger protective blink reflexes to protect the eyes from damage.",
"title": "Senses"
},
{
"paragraph_id": 31,
"text": "Outdoor cats are active both day and night, although they tend to be slightly more active at night. Domestic cats spend the majority of their time in the vicinity of their homes but can range many hundreds of meters from this central point. They establish territories that vary considerably in size, in one study ranging 7–28 ha (17–69 acres). The timing of cats' activity is quite flexible and varied but being low-light predators, they are generally crepuscular, which means they tend to be more active near dawn and dusk. However, house cats' behavior is also influenced by human activity and they may adapt to their owners' sleeping patterns to some extent.",
"title": "Behavior"
},
{
"paragraph_id": 32,
"text": "Cats conserve energy by sleeping more than most animals, especially as they grow older. The daily duration of sleep varies, usually between 12 and 16 hours, with 13 and 14 being the average. Some cats can sleep as much as 20 hours. The term \"cat nap\" for a short rest refers to the cat's tendency to fall asleep (lightly) for a brief period. While asleep, cats experience short periods of rapid eye movement sleep often accompanied by muscle twitches, which suggests they are dreaming.",
"title": "Behavior"
},
{
"paragraph_id": 33,
"text": "The social behavior of the domestic cat ranges from widely dispersed individuals to feral cat colonies that gather around a food source, based on groups of co-operating females. Within such groups, one cat is usually dominant over the others. Each cat in a colony holds a distinct territory, with sexually active males having the largest territories, which are about 10 times larger than those of female cats and may overlap with several females' territories. These territories are marked by urine spraying, by rubbing objects at head height with secretions from facial glands, and by defecation. Between these territories are neutral areas where cats watch and greet one another without territorial conflicts. Outside these neutral areas, territory holders usually chase away stranger cats, at first by staring, hissing, and growling and, if that does not work, by short but noisy and violent attacks. Despite this colonial organization, cats do not have a social survival strategy or a herd behavior, and always hunt alone.",
"title": "Behavior"
},
{
"paragraph_id": 34,
"text": "Life in proximity to humans and other domestic animals has led to a symbiotic social adaptation in cats, and cats may express great affection toward humans or other animals. Ethologically, a cat's human keeper functions as if a mother surrogate. Adult cats live their lives in a kind of extended kittenhood, a form of behavioral neoteny. Their high-pitched sounds may mimic the cries of a hungry human infant, making them particularly difficult for humans to ignore. Some pet cats are poorly socialized. In particular, older cats show aggressiveness toward newly arrived kittens, which include biting and scratching; this type of behavior is known as feline asocial aggression.",
"title": "Behavior"
},
{
"paragraph_id": 35,
"text": "Redirected aggression is a common form of aggression which can occur in multiple cat households. In redirected aggression there is usually something that agitates the cat: this could be a sight, sound, or another source of stimuli which causes a heightened level of anxiety or arousal. If the cat cannot attack the stimuli, it may direct anger elsewhere by attacking or directing aggression to the nearest cat, dog, human or other being.",
"title": "Behavior"
},
{
"paragraph_id": 36,
"text": "Domestic cats' scent rubbing behavior toward humans or other cats is thought to be a feline means for social bonding.",
"title": "Behavior"
},
{
"paragraph_id": 37,
"text": "Domestic cats use many vocalizations for communication, including purring, trilling, hissing, growling/snarling, grunting, and several different forms of meowing. Their body language, including position of ears and tail, relaxation of the whole body, and kneading of the paws, are all indicators of mood. The tail and ears are particularly important social signal mechanisms in cats. A raised tail indicates a friendly greeting, and flattened ears indicate hostility. Tail-raising also indicates the cat's position in the group's social hierarchy, with dominant individuals raising their tails less often than subordinate ones. Feral cats are generally silent. Nose-to-nose touching is also a common greeting and may be followed by social grooming, which is solicited by one of the cats raising and tilting its head.",
"title": "Behavior"
},
{
"paragraph_id": 38,
"text": "Purring may have developed as an evolutionary advantage as a signaling mechanism of reassurance between mother cats and nursing kittens, who are thought to use it as a care-soliciting signal. Post-nursing cats also often purr as a sign of contentment: when being petted, becoming relaxed, or eating. Even though purring is popularly interpreted as indicative of pleasure, it has been recorded in a wide variety of circumstances, most of which involve physical contact between the cat and another, presumably trusted individual. Some cats have been observed to purr continuously when chronically ill or in apparent pain.",
"title": "Behavior"
},
{
"paragraph_id": 39,
"text": "The exact mechanism by which cats purr has long been elusive, but it has been proposed that purring is generated via a series of sudden build-ups and releases of pressure as the glottis is opened and closed, which causes the vocal folds to separate forcefully. The laryngeal muscles in control of the glottis are thought to be driven by a neural oscillator which generates a cycle of contraction and release every 30–40 milliseconds (giving a frequency of 33 to 25 Hz).",
"title": "Behavior"
},
{
"paragraph_id": 40,
"text": "Domestic cats observed in a rescue facility have total of 276 distinct facial expressions based on 26 different facial movements; each facial expression corresponds to different social functions that are likely influenced by domestication.",
"title": "Behavior"
},
{
"paragraph_id": 41,
"text": "Cats are known for spending considerable amounts of time licking their coats to keep them clean. The cat's tongue has backward-facing spines about 500 μm long, which are called papillae. These contain keratin which makes them rigid so the papillae act like a hairbrush. Some cats, particularly longhaired cats, occasionally regurgitate hairballs of fur that have collected in their stomachs from grooming. These clumps of fur are usually sausage-shaped and about 2–3 cm (0.79–1.18 in) long. Hairballs can be prevented with remedies that ease elimination of the hair through the gut, as well as regular grooming of the coat with a comb or stiff brush.",
"title": "Behavior"
},
{
"paragraph_id": 42,
"text": "Among domestic cats, males are more likely to fight than females. Among feral cats, the most common reason for cat fighting is competition between two males to mate with a female. In such cases, most fights are won by the heavier male. Another common reason for fighting in domestic cats is the difficulty of establishing territories within a small home. Female cats also fight over territory or to defend their kittens. Neutering will decrease or eliminate this behavior in many cases, suggesting that the behavior is linked to sex hormones.",
"title": "Behavior"
},
{
"paragraph_id": 43,
"text": "When cats become aggressive, they try to make themselves appear larger and more threatening by raising their fur, arching their backs, turning sideways and hissing or spitting. Often, the ears are pointed down and back to avoid damage to the inner ear and potentially listen for any changes behind them while focused forward. Cats may also vocalize loudly and bare their teeth in an effort to further intimidate their opponents. Fights usually consist of grappling and delivering powerful slaps to the face and body with the forepaws as well as bites. Cats also throw themselves to the ground in a defensive posture to rake their opponent's belly with their powerful hind legs.",
"title": "Behavior"
},
{
"paragraph_id": 44,
"text": "Serious damage is rare, as the fights are usually short in duration, with the loser running away with little more than a few scratches to the face and ears. Fights for mating rights are typically more severe and injuries may include deep puncture wounds and lacerations. Normally, serious injuries from fighting are limited to infections of scratches and bites, though these can occasionally kill cats if untreated. In addition, bites are probably the main route of transmission of feline immunodeficiency virus. Sexually active males are usually involved in many fights during their lives, and often have decidedly battered faces with obvious scars and cuts to their ears and nose. Cats are willing to threaten animals larger than them to defend their territory, such as dogs and foxes.",
"title": "Behavior"
},
{
"paragraph_id": 45,
"text": "The shape and structure of cats' cheeks is insufficient to allow them to take in liquids using suction. Therefore, when drinking they lap with the tongue to draw liquid upward into their mouths. Lapping at a rate of four times a second, the cat touches the smooth tip of its tongue to the surface of the water, and quickly retracts it like a corkscrew, drawing water upward.",
"title": "Behavior"
},
{
"paragraph_id": 46,
"text": "Feral cats and free-fed house cats consume several small meals in a day. The frequency and size of meals varies between individuals. They select food based on its temperature, smell and texture; they dislike chilled foods and respond most strongly to moist foods rich in amino acids, which are similar to meat. Cats reject novel flavors (a response termed neophobia) and learn quickly to avoid foods that have tasted unpleasant in the past. It is also a common misconception that cats like milk/cream, as they tend to avoid sweet food and milk. Most adult cats are lactose intolerant; the sugar in milk is not easily digested and may cause soft stools or diarrhea. Some also develop odd eating habits and like to eat or chew on things like wool, plastic, cables, paper, string, aluminum foil, or even coal. This condition, pica, can threaten their health, depending on the amount and toxicity of the items eaten.",
"title": "Behavior"
},
{
"paragraph_id": 47,
"text": "Cats hunt small prey, primarily birds and rodents, and are often used as a form of pest control. Other common small creatures such as lizards and snakes may also become prey. Cats use two hunting strategies, either stalking prey actively, or waiting in ambush until an animal comes close enough to be captured. The strategy used depends on the prey species in the area, with cats waiting in ambush outside burrows, but tending to actively stalk birds. Domestic cats are a major predator of wildlife in the United States, killing an estimated 1.3 to 4.0 billion birds and 6.3 to 22.3 billion mammals annually.",
"title": "Behavior"
},
{
"paragraph_id": 48,
"text": "Certain species appear more susceptible than others; in one English village, for example, 30% of house sparrow mortality was linked to the domestic cat. In the recovery of ringed robins (Erithacus rubecula) and dunnocks (Prunella modularis) in Britain, 31% of deaths were a result of cat predation. In parts of North America, the presence of larger carnivores such as coyotes which prey on cats and other small predators reduces the effect of predation by cats and other small predators such as opossums and raccoons on bird numbers and variety.",
"title": "Behavior"
},
{
"paragraph_id": 49,
"text": "Perhaps the best-known element of cats' hunting behavior, which is commonly misunderstood and often appalls cat owners because it looks like torture, is that cats often appear to \"play\" with prey by releasing and recapturing it. This cat and mouse behavior is due to an instinctive imperative to ensure that the prey is weak enough to be killed without endangering the cat.",
"title": "Behavior"
},
{
"paragraph_id": 50,
"text": "Another poorly understood element of cat hunting behavior is the presentation of prey to human guardians. One explanation is that cats adopt humans into their social group and share excess kill with others in the group according to the dominance hierarchy, in which humans are reacted to as if they are at or near the top. Another explanation is that they attempt to teach their guardians to hunt or to help their human as if feeding \"an elderly cat, or an inept kitten\". This hypothesis is inconsistent with the fact that male cats also bring home prey, despite males having negligible involvement in raising kittens.",
"title": "Behavior"
},
{
"paragraph_id": 51,
"text": "Domestic cats, especially young kittens, are known for their love of play. This behavior mimics hunting and is important in helping kittens learn to stalk, capture, and kill prey. Cats also engage in play fighting, with each other and with humans. This behavior may be a way for cats to practice the skills needed for real combat, and might also reduce any fear they associate with launching attacks on other animals.",
"title": "Behavior"
},
{
"paragraph_id": 52,
"text": "Cats also tend to play with toys more when they are hungry. Owing to the close similarity between play and hunting, cats prefer to play with objects that resemble prey, such as small furry toys that move rapidly, but rapidly lose interest. They become habituated to a toy they have played with before. String is often used as a toy, but if it is eaten, it can become caught at the base of the cat's tongue and then move into the intestines, a medical emergency which can cause serious illness, even death. Owing to the risks posed by cats eating string, it is sometimes replaced with a laser pointer's dot, which cats may chase.",
"title": "Behavior"
},
{
"paragraph_id": 53,
"text": "The cat secretes and perceives pheromones. Female cats, called queens, are polyestrous with several estrus cycles during a year, lasting usually 21 days. They are usually ready to mate between early February and August in northern temperate zones and throughout the year in equatorial regions.",
"title": "Behavior"
},
{
"paragraph_id": 54,
"text": "Several males, called tomcats, are attracted to a female in heat. They fight over her, and the victor wins the right to mate. At first, the female rejects the male, but eventually, the female allows the male to mate. The female utters a loud yowl as the male pulls out of her because a male cat's penis has a band of about 120–150 backward-pointing penile spines, which are about 1 mm (0.039 in) long; upon withdrawal of the penis, the spines may provide the female with increased sexual stimulation, which acts to induce ovulation.",
"title": "Behavior"
},
{
"paragraph_id": 55,
"text": "After mating, the female cleans her vulva thoroughly. If a male attempts to mate with her at this point, the female attacks him. After about 20 to 30 minutes, once the female is finished grooming, the cycle will repeat. Because ovulation is not always triggered by a single mating, females may not be impregnated by the first male with which they mate. Furthermore, cats are superfecund; that is, a female may mate with more than one male when she is in heat, with the result that different kittens in a litter may have different fathers.",
"title": "Behavior"
},
{
"paragraph_id": 56,
"text": "The morula forms 124 hours after conception. At 148 hours, early blastocysts form. At 10–12 days, implantation occurs. The gestation of queens lasts between 64 and 67 days, with an average of 65 days.",
"title": "Behavior"
},
{
"paragraph_id": 57,
"text": "Data on the reproductive capacity of more than 2,300 free-ranging queens were collected during a study between May 1998 and October 2000. They had one to six kittens per litter, with an average of three kittens. They produced a mean of 1.4 litters per year, but a maximum of three litters in a year. Of 169 kittens, 127 died before they were six months old due to a trauma caused in most cases by dog attacks and road accidents. The first litter is usually smaller than subsequent litters. Kittens are weaned between six and seven weeks of age. Queens normally reach sexual maturity at 5–10 months, and males at 5–7 months. This varies depending on breed. Kittens reach puberty at the age of 9–10 months.",
"title": "Behavior"
},
{
"paragraph_id": 58,
"text": "Cats are ready to go to new homes at about 12 weeks of age, when they are ready to leave their mother. They can be surgically sterilized (spayed or castrated) as early as seven weeks to limit unwanted reproduction. This surgery also prevents undesirable sex-related behavior, such as aggression, territory marking (spraying urine) in males and yowling (calling) in females. Traditionally, this surgery was performed at around six to nine months of age, but it is increasingly being performed before puberty, at about three to six months. In the United States, about 80% of household cats are neutered.",
"title": "Behavior"
},
{
"paragraph_id": 59,
"text": "The average lifespan of pet cats has risen in recent decades. In the early 1980s, it was about seven years, rising to 9.4 years in 1995 and an average of about 13 years as of 2014 and 2023. Some cats have been reported as surviving into their 30s, with the oldest known cat dying at a verified age of 38.",
"title": "Lifespan and health"
},
{
"paragraph_id": 60,
"text": "Neutering increases life expectancy: one study found castrated male cats live twice as long as intact males, while spayed female cats live 62% longer than intact females. Having a cat neutered confers health benefits, because castrated males cannot develop testicular cancer, spayed females cannot develop uterine or ovarian cancer, and both have a reduced risk of mammary cancer.",
"title": "Lifespan and health"
},
{
"paragraph_id": 61,
"text": "About 250 heritable genetic disorders have been identified in cats, many similar to human inborn errors of metabolism. The high level of similarity among the metabolism of mammals allows many of these feline diseases to be diagnosed using genetic tests that were originally developed for use in humans, as well as the use of cats as animal models in the study of the human diseases. Diseases affecting domestic cats include acute infections, parasitic infestations, injuries, and chronic diseases such as kidney disease, thyroid disease, and arthritis. Vaccinations are available for many infectious diseases, as are treatments to eliminate parasites such as worms, ticks, and fleas.",
"title": "Lifespan and health"
},
{
"paragraph_id": 62,
"text": "The domestic cat is a cosmopolitan species and occurs across much of the world. It is adaptable and now present on all continents except Antarctica, and on 118 of the 131 main groups of islands, even on the isolated Kerguelen Islands. Due to its ability to thrive in almost any terrestrial habitat, it is among the world's most invasive species. It lives on small islands with no human inhabitants. Feral cats can live in forests, grasslands, tundra, coastal areas, agricultural land, scrublands, urban areas, and wetlands.",
"title": "Ecology"
},
{
"paragraph_id": 63,
"text": "The unwantedness that leads to the domestic cat being treated as an invasive species is twofold. On one hand, as it is little altered from the wildcat, it can readily interbreed with the wildcat. This hybridization poses a danger to the genetic distinctiveness of some wildcat populations, particularly in Scotland and Hungary, possibly also the Iberian Peninsula, and where protected natural areas are close to human-dominated landscapes, such as Kruger National Park in South Africa. However, its introduction to places where no native felines are present also contributes to the decline of native species.",
"title": "Ecology"
},
{
"paragraph_id": 64,
"text": "Feral cats are domestic cats that were born in or have reverted to a wild state. They are unfamiliar with and wary of humans and roam freely in urban and rural areas. The numbers of feral cats is not known, but estimates of the United States feral population range from 25 to 60 million. Feral cats may live alone, but most are found in large colonies, which occupy a specific territory and are usually associated with a source of food. Famous feral cat colonies are found in Rome around the Colosseum and Forum Romanum, with cats at some of these sites being fed and given medical attention by volunteers.",
"title": "Ecology"
},
{
"paragraph_id": 65,
"text": "Public attitudes toward feral cats vary widely, from seeing them as free-ranging pets to regarding them as vermin.",
"title": "Ecology"
},
{
"paragraph_id": 66,
"text": "Some feral cats can be successfully socialized and 're-tamed' for adoption; young cats, especially kittens and cats that have had prior experience and contact with humans are the most receptive to these efforts.",
"title": "Ecology"
},
{
"paragraph_id": 67,
"text": "On islands, birds can contribute as much as 60% of a cat's diet. In nearly all cases, the cat cannot be identified as the sole cause for reducing the numbers of island birds, and in some instances, eradication of cats has caused a \"mesopredator release\" effect; where the suppression of top carnivores creates an abundance of smaller predators that cause a severe decline in their shared prey. Domestic cats are a contributing factor to the decline of many species, a factor that has ultimately led, in some cases, to extinction. The South Island piopio, Chatham rail, and the New Zealand merganser are a few from a long list, with the most extreme case being the flightless Lyall's wren, which was driven to extinction only a few years after its discovery. One feral cat in New Zealand killed 102 New Zealand lesser short-tailed bats in seven days. In the US, feral and free-ranging domestic cats kill an estimated 6.3 – 22.3 billion mammals annually.",
"title": "Ecology"
},
{
"paragraph_id": 68,
"text": "In Australia, the impact of cats on mammal populations is even greater than the impact of habitat loss. More than one million reptiles are killed by feral cats each day, representing 258 species. Cats have contributed to the extinction of the Navassa curly-tailed lizard and Chioninia coctei.",
"title": "Ecology"
},
{
"paragraph_id": 69,
"text": "Cats are common pets throughout the world, and their worldwide population as of 2007 exceeded 500 million. As of 2017, the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats as of 2020. As of 2021, there were an estimated 220 million owned and 480 million stray cats in the world.",
"title": "Interaction with humans"
},
{
"paragraph_id": 70,
"text": "Cats have been used for millennia to control rodents, notably around grain stores and aboard ships, and both uses extend to the present day.",
"title": "Interaction with humans"
},
{
"paragraph_id": 71,
"text": "As well as being kept as pets, cats are also used in the international fur trade and leather industries for making coats, hats, blankets, stuffed toys, shoes, gloves, and musical instruments. About 24 cats are needed to make a cat-fur coat. This use has been outlawed in the United States since 2000 and in the European Union (as well as the United Kingdom) since 2007.",
"title": "Interaction with humans"
},
{
"paragraph_id": 72,
"text": "Cat pelts have been used for superstitious purposes as part of the practice of witchcraft, and are still made into blankets in Switzerland as traditional medicine thought to cure rheumatism.",
"title": "Interaction with humans"
},
{
"paragraph_id": 73,
"text": "A few attempts to build a cat census have been made over the years, both through associations or national and international organizations (such as that of the Canadian Federation of Humane Societies) and over the Internet, but such a task does not seem simple to achieve. General estimates for the global population of domestic cats range widely from anywhere between 200 million to 600 million. Walter Chandoha made his career photographing cats after his 1949 images of Loco, an especially charming stray taken in, were published around the world. He is reported to have photographed 90,000 cats during his career and maintained an archive of 225,000 images that he drew from for publications during his lifetime.",
"title": "Interaction with humans"
},
{
"paragraph_id": 74,
"text": "A cat show is a judged event in which the owners of cats compete to win titles in various cat-registering organizations by entering their cats to be judged after a breed standard. It is often required that a cat must be healthy and vaccinated in order to participate in a cat show. Both pedigreed and non-purebred companion (\"moggy\") cats are admissible, although the rules differ depending on the organization. Competing cats are compared to the applicable breed standard, and assessed for temperament.",
"title": "Interaction with humans"
},
{
"paragraph_id": 75,
"text": "Cats can be infected or infested with viruses, bacteria, fungus, protozoans, arthropods or worms that can transmit diseases to humans. In some cases, the cat exhibits no symptoms of the disease. The same disease can then become evident in a human. The likelihood that a person will become diseased depends on the age and immune status of the person. Humans who have cats living in their home or in close association are more likely to become infected. Others might also acquire infections from cat feces and parasites exiting the cat's body. Some of the infections of most concern include salmonella, cat-scratch disease and toxoplasmosis.",
"title": "Interaction with humans"
},
{
"paragraph_id": 76,
"text": "In ancient Egypt, cats were worshipped, and the goddess Bastet often depicted in cat form, sometimes taking on the war-like aspect of a lioness. The Greek historian Herodotus reported that killing a cat was forbidden, and when a household cat died, the entire family mourned and shaved their eyebrows. Families took their dead cats to the sacred city of Bubastis, where they were embalmed and buried in sacred repositories. Herodotus expressed astonishment at the domestic cats in Egypt, because he had only ever seen wildcats.",
"title": "Interaction with humans"
},
{
"paragraph_id": 77,
"text": "Ancient Greeks and Romans kept weasels as pets, which were seen as the ideal rodent-killers. The earliest unmistakable evidence of the Greeks having domestic cats comes from two coins from Magna Graecia dating to the mid-fifth century BC showing Iokastos and Phalanthos, the legendary founders of Rhegion and Taras respectively, playing with their pet cats. The usual ancient Greek word for 'cat' was ailouros, meaning 'thing with the waving tail'. Cats are rarely mentioned in ancient Greek literature. Aristotle remarked in his History of Animals that \"female cats are naturally lecherous.\" The Greeks later syncretized their own goddess Artemis with the Egyptian goddess Bastet, adopting Bastet's associations with cats and ascribing them to Artemis. In Ovid's Metamorphoses, when the deities flee to Egypt and take animal forms, the goddess Diana turns into a cat.",
"title": "Interaction with humans"
},
{
"paragraph_id": 78,
"text": "Cats eventually displaced weasels as the pest control of choice because they were more pleasant to have around the house and were more enthusiastic hunters of mice. During the Middle Ages, many of Artemis's associations with cats were grafted onto the Virgin Mary. Cats are often shown in icons of Annunciation and of the Holy Family and, according to Italian folklore, on the same night that Mary gave birth to Jesus, a cat in Bethlehem gave birth to a kitten. Domestic cats were spread throughout much of the rest of the world during the Age of Discovery, as ships' cats were carried on sailing ships to control shipboard rodents and as good-luck charms.",
"title": "Interaction with humans"
},
{
"paragraph_id": 79,
"text": "Several ancient religions believed cats are exalted souls, companions or guides for humans, that are all-knowing but mute so they cannot influence decisions made by humans. In Japan, the maneki neko cat is a symbol of good fortune. In Norse mythology, Freyja, the goddess of love, beauty, and fertility, is depicted as riding a chariot drawn by cats. In Jewish legend, the first cat was living in the house of the first man Adam as a pet that got rid of mice. The cat was once partnering with the first dog before the latter broke an oath they had made which resulted in enmity between the descendants of these two animals. It is also written that neither cats nor foxes are represented in the water, while every other animal has an incarnation species in the water. Although no species are sacred in Islam, cats are revered by Muslims. Some Western writers have stated Muhammad had a favorite cat, Muezza. He is reported to have loved cats so much, \"he would do without his cloak rather than disturb one that was sleeping on it\". The story has no origin in early Muslim writers, and seems to confuse a story of a later Sufi saint, Ahmed ar-Rifa'i, centuries after Muhammad. One of the companions of Muhammad was known as Abu Hurayrah (\"father of the kitten\"), in reference to his documented affection to cats.",
"title": "Interaction with humans"
},
{
"paragraph_id": 80,
"text": "Many cultures have negative superstitions about cats. An example would be the belief that encountering a black cat (\"crossing one's path\") leads to bad luck, or that cats are witches' familiars used to augment a witch's powers and skills. The killing of cats in Medieval Ypres, Belgium, is commemorated in the innocuous present-day Kattenstoet (cat parade). In mid-16th century France, cats would be burnt alive as a form of entertainment, particularly during midsummer festivals. According to Norman Davies, the assembled people \"shrieked with laughter as the animals, howling with pain, were singed, roasted, and finally carbonized\". The remaining ashes were sometimes taken back home by the people for good luck.",
"title": "Interaction with humans"
},
{
"paragraph_id": 81,
"text": "According to a myth in many cultures, cats have multiple lives. In many countries, they are believed to have nine lives, but in Italy, Germany, Greece, Brazil and some Spanish-speaking regions, they are said to have seven lives, while in Arabic traditions, the number of lives is six. An early mention of the myth can be found in John Heywood's The Proverbs of John Heywood (1546):",
"title": "Interaction with humans"
},
{
"paragraph_id": 82,
"text": "Husband, (quoth she), ye studie, be merrie now, And even as ye thinke now, so come to yow. Nay not so, (quoth he), for my thought to tell right, I thinke how you lay groning, wife, all last night. Husband, a groning horse and a groning wife Never faile their master, (quoth she), for my life. No wife, a woman hath nine lives like a cat.",
"title": "Interaction with humans"
},
{
"paragraph_id": 83,
"text": "The myth is attributed to the natural suppleness and swiftness cats exhibit to escape life-threatening situations. Also lending credence to this myth is the fact that falling cats often land on their feet, using an instinctive righting reflex to twist their bodies around. Nonetheless, cats can still be injured or killed by a high fall.",
"title": "Interaction with humans"
}
] | The cat, commonly referred to as the domestic cat or house cat, is the only domesticated species in the family Felidae. Recent advances in archaeology and genetics have shown that the domestication of the cat occurred in the Near East around 7500 BC. It is commonly kept as a house pet and farm cat, but also ranges freely as a feral cat avoiding human contact. It is valued by humans for companionship and its ability to kill vermin. Because of its retractable claws it is adapted to killing small prey like mice and rats. It has a strong flexible body, quick reflexes, sharp teeth, and its night vision and sense of smell are well developed. It is a social species, but a solitary hunter and a crepuscular predator. Cat communication includes vocalizations like meowing, purring, trilling, hissing, growling, and grunting as well as cat body language. It can hear sounds too faint or too high in frequency for human ears, such as those made by small mammals. It also secretes and perceives pheromones. Female domestic cats can have kittens from spring to late autumn in temperate zones and throughout the year in equatorial regions, with litter sizes often ranging from two to five kittens. Domestic cats are bred and shown at events as registered pedigreed cats, a hobby known as cat fancy. Animal population control of cats may be achieved by spaying and neutering, but their proliferation and the abandonment of pets has resulted in large numbers of feral cats worldwide, contributing to the extinction of bird, mammal and reptile species. As of 2017, the domestic cat was the second most popular pet in the United States, with 95.6 million cats owned and around 42 million households owning at least one cat. In the United Kingdom, 26% of adults have a cat, with an estimated population of 10.9 million pet cats as of 2020. As of 2021, there were an estimated 220 million owned and 480 million stray cats in the world. | 2001-11-09T13:56:03Z | 2023-12-30T05:49:37Z | [
"Template:Portal",
"Template:Cite book",
"Template:Wikispecies-inline",
"Template:Cite Americana",
"Template:Good article",
"Template:MSW3 Wozencraft",
"Template:Cite web",
"Template:Commons-inline",
"Template:Cat nav",
"Template:Taxonbar",
"Template:Use dmy dates",
"Template:Reflist",
"Template:Spoken Wikipedia",
"Template:Authority control",
"Template:Cite journal",
"Template:Short description",
"Template:About",
"Template:Use American English",
"Template:Speciesbox",
"Template:Main",
"Template:See also",
"Template:Div col",
"Template:Tertiary source",
"Template:Citation",
"Template:Cite news",
"Template:Page needed",
"Template:Pp-semi-indef",
"Template:User-generated inline",
"Template:Mya",
"Template:Clade gallery",
"Template:Clear",
"Template:Wiktionary-inline",
"Template:Wikibooks inline",
"Template:Pp-move",
"Template:As of",
"Template:Rp",
"Template:Poem quote",
"Template:Div col end",
"Template:Cite magazine",
"Template:Lang",
"Template:Citation needed",
"Template:Image frame",
"Template:Carnivora",
"Template:Convert",
"Template:Wikiquote-inline"
] | https://en.wikipedia.org/wiki/Cat |
6,681 | Crank | Crank may refer to: | [
{
"paragraph_id": 0,
"text": "Crank may refer to:",
"title": ""
}
] | Crank may refer to: | 2022-07-05T10:14:10Z | [
"Template:Lookfrom",
"Template:Intitle",
"Template:Disambiguation",
"Template:Wiktionary",
"Template:TOC right"
] | https://en.wikipedia.org/wiki/Crank |
|
6,682 | Clade | In biological phylogenetics, a clade (from Ancient Greek κλάδος (kládos) 'branch'), also known as a monophyletic group or natural group, is a grouping of organisms that are monophyletic – that is, composed of a common ancestor and all its lineal descendants – on a phylogenetic tree. In the taxonomical literature, sometimes the Latin form cladus (plural cladi) is used rather than the English form. Clades are the fundamental unit of cladistics, a modern approach to taxonomy adopted by most biological fields.
The common ancestor may be an individual, a population, or a species (extinct or extant). Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic (Greek: "one clan") groups.
Over the last few decades, the cladistic approach has revolutionized biological classification and revealed surprising evolutionary relationships among organisms. Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed include that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea.
The term "clade" is also used with a similar meaning in other fields besides biology, such as historical linguistics; see Cladistics § In disciplines other than biology.
The term "clade" was coined in 1957 by the biologist Julian Huxley to refer to the result of cladogenesis, the evolutionary splitting of a parent species into two distinct species, a concept Huxley borrowed from Bernhard Rensch.
Many commonly named groups – rodents and insects, for example – are clades because, in each case, the group consists of a common ancestor with all its descendant branches. Rodents, for example, are a branch of mammals that split off after the end of the period when the clade Dinosauria stopped being the dominant terrestrial vertebrates 66 million years ago. The original population and all its descendants are a clade. The rodent clade corresponds to the order Rodentia, and insects to the class Insecta. These clades include smaller clades, such as chipmunk or ant, each of which consists of even smaller clades. The clade "rodent" is in turn included in the mammal, vertebrate and animal clades.
The idea of a clade did not exist in pre-Darwinian Linnaean taxonomy, which was based by necessity only on internal or external morphological similarities between organisms. Many of the better known animal groups in Linnaeus' original Systema Naturae (mostly vertebrate groups) do represent clades. The phenomenon of convergent evolution is responsible for many cases of misleading similarities in the morphology of groups that evolved from different lineages.
With the increasing realization in the first half of the 19th century that species had changed and split through the ages, classification increasingly came to be seen as branches on the evolutionary tree of life. The publication of Darwin's theory of evolution in 1859 gave this view increasing weight. Thomas Henry Huxley, an early advocate of evolutionary theory, proposed a revised taxonomy based on a concept strongly resembling clades, although the term clade itself would not be coined until 1957 by his grandson, Julian Huxley.
German biologist Emil Hans Willi Hennig (1913–1976) is considered to be the founder of cladistics. He proposed a classification system that represented repeated branchings of the family tree, as opposed to the previous systems, which put organisms on a "ladder", with supposedly more "advanced" organisms at the top.
Taxonomists have increasingly worked to make the taxonomic system reflect evolution. When it comes to naming, this principle is not always compatible with the traditional rank-based nomenclature (in which only taxa associated with a rank can be named) because not enough ranks exist to name a long series of nested clades. For these and other reasons, phylogenetic nomenclature has been developed; it is still controversial.
As an example, see the full current classification of Anas platyrhynchos (the mallard duck) with 40 clades from Eukaryota down by following this Wikispecies link and clicking on "Expand".
The name of a clade is conventionally a plural, where the singular refers to each member individually. A unique exception is the reptile clade Dracohors, which was made by haplology from Latin "draco" and "cohors", i.e. "the dragon cohort"; its form with a suffix added should be e.g. "dracohortian".
A clade is by definition monophyletic, meaning that it contains one ancestor (which can be an organism, a population, or a species) and all its descendants. The ancestor can be known or unknown; any and all members of a clade can be extant or extinct.
The science that tries to reconstruct phylogenetic trees and thus discover clades is called phylogenetics or cladistics, the latter term coined by Ernst Mayr (1965), derived from "clade". The results of phylogenetic/cladistic analyses are tree-shaped diagrams called cladograms; they, and all their branches, are phylogenetic hypotheses.
Three methods of defining clades are featured in phylogenetic nomenclature: node-, stem-, and apomorphy-based (see Phylogenetic nomenclature § Phylogenetic definitions of clade names for detailed definitions).
The relationship between clades can be described in several ways:
The age of a clade can be described based on two different reference points, crown age and stem age. The crown age of a clade refers to the age of the most recent common ancestor of all of the species in the clade. The stem age of a clade refers to the time that the ancestral lineage of the clade diverged from its sister clade. A clade's stem age is either the same as or older than its crown age. Note that ages of clades cannot be directly observed. They are inferred, either from stratigraphy of fossils, or from molecular clock estimates.
Viruses, and particularly RNA viruses, form clades. These are useful in tracking the spread of viral infections. HIV, for example, has clades called subtypes, which vary in geographical prevalence. HIV subtype (clade) B, for example, is predominant in Europe, the Americas and Japan, whereas subtype A is more common in east Africa.
{
"paragraph_id": 0,
"text": "In biological phylogenetics, a clade (from Ancient Greek κλάδος (kládos) 'branch'), also known as a monophyletic group or natural group, is a grouping of organisms that are monophyletic – that is, composed of a common ancestor and all its lineal descendants – on a phylogenetic tree. In the taxonomical literature, sometimes the Latin form cladus (plural cladi) is used rather than the English form. Clades are the fundamental unit of cladistics, a modern approach to taxonomy adopted by most biological fields.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The common ancestor may be an individual, a population, or a species (extinct or extant). Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic (Greek: \"one clan\") groups.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Over the last few decades, the cladistic approach has revolutionized biological classification and revealed surprising evolutionary relationships among organisms. Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed include that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The term \"clade\" is also used with a similar meaning in other fields besides biology, such as historical linguistics; see Cladistics § In disciplines other than biology.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term \"clade\" was coined in 1957 by the biologist Julian Huxley to refer to the result of cladogenesis, the evolutionary splitting of a parent species into two distinct species, a concept Huxley borrowed from Bernhard Rensch.",
"title": "Naming and etymology"
},
{
"paragraph_id": 5,
"text": "Many commonly named groups – rodents and insects, for example – are clades because, in each case, the group consists of a common ancestor with all its descendant branches. Rodents, for example, are a branch of mammals that split off after the end of the period when the clade Dinosauria stopped being the dominant terrestrial vertebrates 66 million years ago. The original population and all its descendants are a clade. The rodent clade corresponds to the order Rodentia, and insects to the class Insecta. These clades include smaller clades, such as chipmunk or ant, each of which consists of even smaller clades. The clade \"rodent\" is in turn included in the mammal, vertebrate and animal clades.",
"title": "Naming and etymology"
},
{
"paragraph_id": 6,
"text": "The idea of a clade did not exist in pre-Darwinian Linnaean taxonomy, which was based by necessity only on internal or external morphological similarities between organisms. Many of the better known animal groups in Linnaeus' original Systema Naturae (mostly vertebrate groups) do represent clades. The phenomenon of convergent evolution is responsible for many cases of misleading similarities in the morphology of groups that evolved from different lineages.",
"title": "History of nomenclature and taxonomy"
},
{
"paragraph_id": 7,
"text": "With the increasing realization in the first half of the 19th century that species had changed and split through the ages, classification increasingly came to be seen as branches on the evolutionary tree of life. The publication of Darwin's theory of evolution in 1859 gave this view increasing weight. Thomas Henry Huxley, an early advocate of evolutionary theory, proposed a revised taxonomy based on a concept strongly resembling clades, although the term clade itself would not be coined until 1957 by his grandson, Julian Huxley.",
"title": "History of nomenclature and taxonomy"
},
{
"paragraph_id": 8,
"text": "German biologist Emil Hans Willi Hennig (1913–1976) is considered to be the founder of cladistics. He proposed a classification system that represented repeated branchings of the family tree, as opposed to the previous systems, which put organisms on a \"ladder\", with supposedly more \"advanced\" organisms at the top.",
"title": "History of nomenclature and taxonomy"
},
{
"paragraph_id": 9,
"text": "Taxonomists have increasingly worked to make the taxonomic system reflect evolution. When it comes to naming, this principle is not always compatible with the traditional rank-based nomenclature (in which only taxa associated with a rank can be named) because not enough ranks exist to name a long series of nested clades. For these and other reasons, phylogenetic nomenclature has been developed; it is still controversial.",
"title": "History of nomenclature and taxonomy"
},
{
"paragraph_id": 10,
"text": "As an example, see the full current classification of Anas platyrhynchos (the mallard duck) with 40 clades from Eukaryota down by following this Wikispecies link and clicking on \"Expand\".",
"title": "History of nomenclature and taxonomy"
},
{
"paragraph_id": 11,
"text": "The name of a clade is conventionally a plural, where the singular refers to each member individually. A unique exception is the reptile clade Dracohors, which was made by haplology from Latin \"draco\" and \"cohors\", i.e. \"the dragon cohort\"; its form with a suffix added should be e.g. \"dracohortian\".",
"title": "History of nomenclature and taxonomy"
},
{
"paragraph_id": 12,
"text": "A clade is by definition monophyletic, meaning that it contains one ancestor (which can be an organism, a population, or a species) and all its descendants. The ancestor can be known or unknown; any and all members of a clade can be extant or extinct.",
"title": "Definition"
},
{
"paragraph_id": 13,
"text": "The science that tries to reconstruct phylogenetic trees and thus discover clades is called phylogenetics or cladistics, the latter term coined by Ernst Mayr (1965), derived from \"clade\". The results of phylogenetic/cladistic analyses are tree-shaped diagrams called cladograms; they, and all their branches, are phylogenetic hypotheses.",
"title": "Clades and phylogenetic trees"
},
{
"paragraph_id": 14,
"text": "Three methods of defining clades are featured in phylogenetic nomenclature: node-, stem-, and apomorphy-based (see Phylogenetic nomenclature§Phylogenetic definitions of clade names for detailed definitions).",
"title": "Clades and phylogenetic trees"
},
{
"paragraph_id": 15,
"text": "The relationship between clades can be described in several ways:",
"title": "Terminology"
},
{
"paragraph_id": 16,
"text": "The age of a clade can be described based on two different reference points, crown age and stem age. The crown age of a clade refers to the age of the most recent common ancestor of all of the species in the clade. The stem age of a clade refers to the time that the ancestral lineage of the clade diverged from its sister clade. A clade's stem age is either the same as or older than its crown age. Note that ages of clades cannot be directly observed. They are inferred, either from stratigraphy of fossils, or from molecular clock estimates.",
"title": "Terminology"
},
{
"paragraph_id": 17,
"text": "Viruses, and particularly RNA viruses form clades. These are useful in tracking the spread of viral infections. HIV, for example, has clades called subtypes, which vary in geographical prevalence. HIV subtype (clade) B, for example is predominant in Europe, the Americas and Japan, whereas subtype A is more common in east Africa.",
"title": "Viruses"
}
] | In biological phylogenetics, a clade, also known as a monophyletic group or natural group, is a grouping of organisms that are monophyletic – that is, composed of a common ancestor and all its lineal descendants – on a phylogenetic tree. In the taxonomical literature, sometimes the Latin form cladus is used rather than the English form. Clades are the fundamental unit of cladistics, a modern approach to taxonomy adopted by most biological fields. The common ancestor may be an individual, a population, or a species. Clades are nested, one in another, as each branch in turn splits into smaller branches. These splits reflect evolutionary history as populations diverged and evolved independently. Clades are termed monophyletic groups. Over the last few decades, the cladistic approach has revolutionized biological classification and revealed surprising evolutionary relationships among organisms. Increasingly, taxonomists try to avoid naming taxa that are not clades; that is, taxa that are not monophyletic. Some of the relationships between organisms that the molecular biology arm of cladistics has revealed include that fungi are closer relatives to animals than they are to plants, archaea are now considered different from bacteria, and multicellular organisms may have evolved from archaea. The term "clade" is also used with a similar meaning in other fields besides biology, such as historical linguistics; see Cladistics § In disciplines other than biology. | 2002-02-25T15:43:11Z | 2023-12-22T19:22:34Z | [
"Template:Other uses",
"Template:Use dmy dates",
"Template:Main",
"Template:Sfn",
"Template:Phylogenetics",
"Template:Etymology",
"Template:Main articles",
"Template:Cite web",
"Template:Short description",
"Template:Reflist",
"Template:Cite book",
"Template:Refend",
"Template:Wiktionary",
"Template:Evolution",
"Template:Citation needed",
"Template:Div col",
"Template:Div col end",
"Template:Cite journal",
"Template:Refbegin"
] | https://en.wikipedia.org/wiki/Clade |
6,684 | Communications in Afghanistan | Communications in Afghanistan is under the control of the Ministry of Communications and Information Technology (MCIT). It has expanded rapidly since the Karzai administration was formed in late 2001, with the launch of wireless companies, internet services, radio stations and television channels.
The Afghan government signed a $64.5 million agreement in 2006 with China's ZTE on the establishment of a countrywide optical fiber telecommunications network. The project began to improve telephone, internet, television and radio services throughout Afghanistan. About 90% of the country's population had access to communication services by the end of 2013.
Afghanistan uses its own space satellite called Afghansat 1. There are about 18 million mobile phone users in the country. Telecom companies include Afghan Telecom, Afghan Wireless, Etisalat, MTN, Roshan, Salaam. Around 20% of the population has access to the Internet.
Afghanistan was given legal control of the ".af" domain in 2003, and the Afghanistan Network Information Center (AFGNIC) was established to administer domain names. The country has 327,000 IP addresses and around 6,000 .af domains. Internet in Afghanistan is accessed by over 9 million users today. According to a 2020 estimate, over 7 million residents, which is roughly 18% of the population, had access to the internet. There are over a dozen different internet service providers in Afghanistan.
In 1870, a central post office was established at Bala Hissar in Kabul and a post office in the capital of each province. The service was slowly expanded over the years as more postal offices were established in each large city by 1918. Afghanistan became a member of the Universal Postal Union in 1928, and the postal administration was elevated to the Ministry of Communication in 1934. The civil war disrupted the issuing of official stamps during the 1980s and 1990s, but by 1999 postal service was operating again. Postal services to/from Kabul worked remarkably well all throughout the war years. Postal services to/from Herat resumed in 1997. The Afghan government has reported to the UPU several times about illegal stamps being issued and sold in 2003 and 2007.
Afghanistan Post has been reorganizing the postal service in the 2000s with assistance from Pakistan Post. The Afghanistan Postal commission was formed to prepare a written policy for the development of the postal sector, which will form the basis of a new postal services law governing licensing of postal services providers. The project was expected to finish by 2008.
Radio broadcasting in Afghanistan began in 1925 with Radio Kabul being the first station. The country currently has over 200 AM, FM and shortwave radio stations. They broadcast in Dari, Pashto, English, Uzbeki and a number of other languages.
In January 2014 the Afghan Ministry of Communications and Information Technology signed an agreement with Eutelsat for the use of satellite resources to enhance deployment of Afghanistan's national broadcasting and telecommunications infrastructure as well as its international connectivity. Afghansat 1 was officially launched in May 2014, with expected service for at least seven years in Afghanistan. The Afghan government plans to launch Afghansat 2 after the lease of Afghansat 1 ends.
According to 2013 statistics, there were 20,521,585 GSM mobile phone subscribers and 177,705 CDMA subscribers in Afghanistan. Mobile communications have improved because of the introduction of wireless carriers. The first was Afghan Wireless and the second Roshan, which began providing services to all major cities within Afghanistan. There are also a number of VSAT stations in major cities such as Kabul, Kandahar, Herat, Mazari Sharif, and Jalalabad, providing international and domestic voice/data connectivity. The international calling code for Afghanistan is +93. The following is a partial list of mobile phone companies in the country:
All the companies providing communication services are obligated to deliver 2.5% of their income to the communication development fund annually. According to the Ministry of Communication and Information Technology, there are 4,760 active towers throughout the country, which cover 85% of the population. The Ministry of Communication and Information Technology plans to expand its services in remote parts of the country, where the remaining 15% of the population will be covered with the installation of 700 new towers. According to WikiLeaks, phone calls in Afghanistan have been monitored by the National Security Agency.
There are over 106 television operators in Afghanistan and 320 television transmitters, many of which are based in Kabul, while others broadcast from other provinces. Selected foreign channels are also shown to the public in Afghanistan, but with the use of the internet, over 3,500 international TV channels may be accessed in Afghanistan.
{
"paragraph_id": 0,
"text": "Communications in Afghanistan is under the control of the Ministry of Communications and Information Technology (MCIT). It has rapidly expanded after the Karzai administration was formed in late 2001, and has embarked on wireless companies, internet, radio stations and television channels.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The Afghan government signed a $64.5 million agreement in 2006 with China's ZTE on the establishment of a countrywide optical fiber telecommunications network. The project began to improve telephone, internet, television and radio services throughout Afghanistan. About 90% of the country's population had access to communication services by the end of 2013.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Afghanistan uses its own space satellite called Afghansat 1. There are about 18 million mobile phone users in the country. Telecom companies include Afghan Telecom, Afghan Wireless, Etisalat, MTN, Roshan, Salaam. Around 20% of the population has access to the Internet.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Afghanistan was given legal control of the \".af\" domain in 2003, and the Afghanistan Network Information Center (AFGNIC) was established to administer domain names. The country has 327,000 IP addresses and around 6,000 .af domains. Internet in Afghanistan is accessed by over 9 million users today. According to a 2020 estimate, over 7 million residents, which is roughly 18% of the population, had access to the internet. There are over a dozen different internet service providers in Afghanistan.",
"title": "Internet"
},
{
"paragraph_id": 4,
"text": "In 1870, a central post office was established at Bala Hissar in Kabul and a post office in the capital of each province. The service was slowly being expanded over the years as more postal offices were established in each large city by 1918. Afghanistan became a member of the Universal Postal Union in 1928, and the postal administration elevated to the Ministry of Communication in 1934. Civil war caused a disruption in issuing official stamps during the 1980s–90s war but in 1999 postal service was operating again. Postal services to/from Kabul worked remarkably well all throughout the war years. Postal services to/from Herat resumed in 1997. The Afghan government has reported to the UPU several times about illegal stamps being issued and sold in 2003 and 2007.",
"title": "Postal service"
},
{
"paragraph_id": 5,
"text": "Afghanistan Post has been reorganizing the postal service in 2000s with assistance from Pakistan Post. The Afghanistan Postal commission was formed to prepare a written policy for the development of the postal sector, which will form the basis of a new postal services law governing licensing of postal services providers. The project was expected to finish by 2008.",
"title": "Postal service"
},
{
"paragraph_id": 6,
"text": "Radio broadcasting in Afghanistan began in 1925 with Radio Kabul being the first station. The country currently has over 200 AM, FM and shortwave radio stations. They broadcast in Dari, Pashto, English, Uzbeki and a number of other languages.",
"title": "Radio"
},
{
"paragraph_id": 7,
"text": "In January 2014 the Afghan Ministry of Communications and Information Technology signed an agreement with Eutelsat for the use of satellite resources to enhance deployment of Afghanistan's national broadcasting and telecommunications infrastructure as well as its international connectivity. Afghansat 1 was officially launched in May 2014, with expected service for at least seven years in Afghanistan. The Afghan government plans to launch Afghansat 2 after the lease of Afghansat 1 ends.",
"title": "Satellite"
},
{
"paragraph_id": 8,
"text": "According to 2013 statistics, there were 20,521,585 GSM mobile phone subscribers and 177,705 CDMA subscribers in Afghanistan. Mobile communications have improved because of the introduction of wireless carriers. The first was Afghan Wireless and the second Roshan, which began providing services to all major cities within Afghanistan. There are also a number of VSAT stations in major cities such as Kabul, Kandahar, Herat, Mazari Sharif, and Jalalabad, providing international and domestic voice/data connectivity. The international calling code for Afghanistan is +93. The following is a partial list of mobile phone companies in the country:",
"title": "Telephone"
},
{
"paragraph_id": 9,
"text": "All the companies providing communication services are obligated to deliver 2.5% of their income to the communication development fund annually. According to the Ministry of Communication and Information Technology there are 4760 active towers throughout the country which covers 85% of the population. The Ministry of Communication and Information Technology plans to expand its services in remote parts of the country where the remaining 15% of the population will be covered with the installation of 700 new towers. According to WikiLeaks, phone calls in Afghanistan have been monitored by the National Security Agency.",
"title": "Telephone"
},
{
"paragraph_id": 10,
"text": "There are over 106 television operators in Afghanistan and 320 television transmitters, many of which are based Kabul, while others are broadcast from other provinces. Selected foreign channels are also shown to the public in Afghanistan, but with the use of the internet, over 3,500 international TV channels may be accessed in Afghanistan.",
"title": "Television"
}
] | Communications in Afghanistan is under the control of the Ministry of Communications and Information Technology (MCIT). It has expanded rapidly since the Karzai administration was formed in late 2001, with the launch of wireless companies, internet services, radio stations and television channels. The Afghan government signed a $64.5 million agreement in 2006 with China's ZTE on the establishment of a countrywide optical fiber telecommunications network. The project began to improve telephone, internet, television and radio services throughout Afghanistan. About 90% of the country's population had access to communication services by the end of 2013. Afghanistan uses its own space satellite called Afghansat 1. There are about 18 million mobile phone users in the country. Telecom companies include Afghan Telecom, Afghan Wireless, Etisalat, MTN, Roshan, Salaam. Around 20% of the population has access to the Internet. | 2001-10-16T05:46:17Z | 2023-11-13T16:07:15Z | [
"Template:Afghanistan topics",
"Template:Telecommunications",
"Template:Use mdy dates",
"Template:Main",
"Template:Citation needed",
"Template:Cite news",
"Template:Further",
"Template:Reflist",
"Template:Cite web",
"Template:Asia topic"
] | https://en.wikipedia.org/wiki/Communications_in_Afghanistan |
6,689 | Christian of Oliva | Christian of Oliva (Polish: Chrystian z Oliwy), also Christian of Prussia (German: Christian von Preußen) (died 4 December(?) 1245) was the first missionary bishop of Prussia.
Christian was born about 1180 in the Duchy of Pomerania, possibly in the area of Chociwel (according to Johannes Voigt). Probably as a juvenile he joined the Cistercian Order at newly established Kołbacz (Kolbatz) Abbey and in 1209 entered Oliwa Abbey near Gdańsk, founded in 1178 by the Samboride dukes of Pomerelia. At this time the Piast duke Konrad I of Masovia with the consent of Pope Innocent III had started the first of several unsuccessful Prussian Crusades into the adjacent Chełmno Land and Christian acted as a missionary among the Prussians east of the Vistula River.
In 1209, Christian was commissioned by the Pope to be responsible for the Prussian missions between the Vistula and Neman Rivers and in 1212 he was appointed bishop. In 1215 he went to Rome in order to report to the Curia on the condition and prospects of his mission, and was consecrated first "Bishop of Prussia" at the Fourth Council of the Lateran. His seat as a bishop remained at Oliwa Abbey on the western side of the Vistula, whereas the pagan Prussian (later East Prussian) territory was on the eastern side of it.
The attempts by Konrad of Masovia to subdue the Prussian lands had provoked long-term and intense border quarrels, whereby the Polish lands of Masovia, Cuyavia and even Greater Poland became subject to continuous Prussian raids. Bishop Christian asked the new Pope Honorius III for consent to start another Crusade; however, a first campaign in 1217 proved a failure, and even the joint efforts by Duke Konrad with the Polish High Duke Leszek I the White and Duke Henry I the Bearded of Silesia in 1222/23 only led to the reconquest of Chełmno Land but did not stop the Prussian invasions. Christian was at least able to establish the Diocese of Chełmno east of the Vistula, adopting the episcopal rights from the Masovian Bishop of Płock, confirmed by both Duke Konrad and the Pope.
Duke Konrad of Masovia was still unable to end the Prussian attacks on his territory and in 1226 began to conduct negotiations with the Teutonic Knights under Grand Master Hermann von Salza in order to strengthen his forces. As von Salza initially hesitated to offer his services, Christian created the military Order of Dobrzyń (Fratres Milites Christi) in 1228, though to little avail.
Meanwhile, von Salza had to abandon his hope of establishing an Order's State in the Burzenland region of Transylvania, which had led to an open break with King Andrew II of Hungary. He obtained a charter from Emperor Frederick II issued in the 1226 Golden Bull of Rimini, whereby Chełmno Land would be the unshared possession of the Teutonic Knights, which was confirmed by Duke Konrad of Masovia in the 1230 Treaty of Kruszwica. Christian ceded his possessions to the new State of the Teutonic Order and in turn was appointed Bishop of Chełmno the next year.
Bishop Christian continued his mission in Sambia (Samland), where from 1233 to 1239 he was held captive by pagan Prussians; he was freed in exchange for five other hostages, who in turn were released for a ransom of 800 Marks, granted to him by Pope Gregory IX. He had to deal with the constant curtailment of his autonomy by the Knights and asked the Roman Curia for mediation. In 1243, the Papal legate William of Modena divided the Prussian lands of the Order's State into four dioceses, whereby the bishops retained secular rule over about one third of the diocesan territory:
all suffragan dioceses under the Archbishopric of Riga. Christian was supposed to choose one of them, but did not agree to the division. He possibly retired to the Cistercian abbey in Sulejów, where he died before the conflict was solved.
Coca-Cola, or Coke, is a carbonated soft drink manufactured by the Coca-Cola Company. In 2013, Coke products were sold in over 200 countries worldwide, with consumers drinking more than 1.8 billion company beverage servings each day. Coca-Cola ranked No. 87 in the 2018 Fortune 500 list of the largest United States corporations by total revenue. Based on Interbrand's "best global brand" study of 2020, Coca-Cola was the world's sixth most valuable brand.
Originally marketed as a temperance drink and intended as a patent medicine, Coca-Cola was invented in the late 19th century by John Stith Pemberton in Atlanta, Georgia. In 1888, Pemberton sold the ownership rights to Asa Griggs Candler, a businessman whose marketing tactics led Coca-Cola to its dominance of the global soft-drink market throughout the 20th and 21st centuries. The name refers to two of its original ingredients: coca leaves and kola nuts (a source of caffeine). The current formula of Coca-Cola remains a trade secret; however, a variety of reported recipes and experimental recreations have been published. Coca-Cola has used the secrecy around the formula in its marketing, as reportedly only a handful of anonymous employees know it. The drink has inspired imitators and created a whole classification of soft drink: colas.
The Coca-Cola Company produces concentrate, which is then sold to licensed Coca-Cola bottlers throughout the world. The bottlers, who hold exclusive territory contracts with the company, produce the finished product in cans and bottles from the concentrate, in combination with filtered water and sweeteners. A typical 12-US-fluid-ounce (350 ml) can contains 38 grams (1.3 oz) of sugar (usually in the form of high-fructose corn syrup in North America). The bottlers then sell, distribute, and merchandise Coca-Cola to retail stores, restaurants, and vending machines throughout the world. The Coca-Cola Company also sells concentrate for soda fountains of major restaurants and foodservice distributors.
The Coca-Cola Company has on occasion introduced other cola drinks under the Coke name. The most common of these is Diet Coke, along with others including Caffeine-Free Coca-Cola, Diet Coke Caffeine-Free, Coca-Cola Zero Sugar, Coca-Cola Cherry, Coca-Cola Vanilla, and special versions with lemon, lime, and coffee. Coca-Cola was called Coca-Cola Classic from July 1985 to 2009, to distinguish it from "New Coke".
Confederate Colonel John Pemberton, wounded in the American Civil War and addicted to morphine, also had a medical degree and began a quest to find a substitute for the problematic drug. In 1885 at Pemberton's Eagle Drug and Chemical House, his drugstore in Columbus, Georgia, he registered Pemberton's French Wine Coca nerve tonic. Pemberton's tonic may have been inspired by the formidable success of Vin Mariani, a French-Corsican coca wine, but his recipe additionally included the African kola nut, the beverage's source of caffeine. A Spanish drink called "Kola Coca" was presented at a contest in Philadelphia in 1885, a year before the official birth of Coca-Cola. The rights for this Spanish drink were bought by Coca-Cola in 1953.
In 1886, when Atlanta and Fulton County passed prohibition legislation, Pemberton responded by developing Coca-Cola, a nonalcoholic version of Pemberton's French Wine Coca. It was marketed as "Coca-Cola: The temperance drink", which appealed to many people as the temperance movement enjoyed wide support during this time. The first sales were at Jacob's Pharmacy in Atlanta, Georgia, on May 8, 1886, where it initially sold for five cents a glass. Drugstore soda fountains were popular in the United States at the time due to the belief that carbonated water was good for the health, and Pemberton's new drink was marketed and sold as a patent medicine, Pemberton claiming it a cure for many diseases, including morphine addiction, indigestion, nerve disorders, headaches, and impotence. Pemberton ran the first advertisement for the beverage on May 29 of the same year in the Atlanta Journal.
By 1888, three versions of Coca-Cola – sold by three separate businesses – were on the market. A co-partnership had been formed on January 14, 1888, between Pemberton and four Atlanta businessmen: J.C. Mayfield, A.O. Murphey, C.O. Mullahy, and E.H. Bloodworth. The partnership was never codified in a signed document; in testimony given years later, Asa Candler asserted that he had acquired a stake in Pemberton's company as early as 1887. John Pemberton declared that the name "Coca-Cola" belonged to his son, Charley, but the other two manufacturers could continue to use the formula.
Charley Pemberton's record of control over the "Coca-Cola" name was the underlying factor that allowed him to participate as a major shareholder in the March 1888 Coca-Cola Company incorporation filing made in his father's place. Charley's exclusive control over the "Coca-Cola" name became a continual thorn in Asa Candler's side. Candler's oldest son, Charles Howard Candler, authored a book in 1950 published by Emory University. In this definitive biography of his father, Candler states: "on April 14, 1888, the young druggist Asa Griggs Candler purchased a one-third interest in the formula of an almost completely unknown proprietary elixir known as Coca-Cola." The deal was actually between John Pemberton's son Charley and Walker, Candler & Co. – with John Pemberton acting as cosigner for his son. For $50 down and $500 in 30 days, Walker, Candler & Co. obtained all of the one-third interest in the Coca-Cola Company that Charley held, while Charley still held on to the name. After the April 14 deal, on April 17, 1888, Candler acquired one-half of the Walker/Dozier interest shares for an additional $750.
After Candler had gained a better foothold on Coca-Cola in April 1888, he was nevertheless forced to sell the beverage he produced with his version of the recipe under the names "Yum Yum" and "Koke". This was while Charley Pemberton was selling the elixir, albeit a cruder mixture, under the name "Coca-Cola", all with his father's blessing. After both names failed to catch on, by the middle of 1888 the Atlanta pharmacist was quite anxious to establish a firmer legal claim to Coca-Cola, and hoped he could force his two competitors, Walker and Dozier, completely out of the business as well.
John Pemberton died suddenly on August 16, 1888. Asa Candler then decided to move swiftly forward to attain full control of the entire Coca-Cola operation.
Charley Pemberton, an alcoholic and opium addict, unnerved Asa Candler more than anyone else. Candler is said to have quickly maneuvered to purchase the exclusive rights to the name "Coca-Cola" from Pemberton's son Charley immediately after he learned of Dr. Pemberton's death. One of several stories states that Candler approached Charley's mother at John Pemberton's funeral and offered her $300 in cash for the title to the name.
In Charles Howard Candler's 1950 book about his father, he stated: "On August 30 [1888], he [Asa Candler] became the sole proprietor of Coca-Cola, a fact which was stated on letterheads, invoice blanks and advertising copy."
With this action on August 30, 1888, Candler's claim of sole control became technically true. Candler had negotiated with Margaret Dozier and her brother Woolfolk Walker a full payment amounting to $1,000, which all agreed Candler could pay off with a series of notes over a specified time span. By May 1, 1889, Candler was claiming full ownership of the Coca-Cola beverage, with a total investment outlay for the drink enterprise over the years amounting to $2,300.
In 1914, Margaret Dozier, as co-owner of the original Coca-Cola Company in 1888, came forward to claim that her signature on the 1888 Coca-Cola Company bill of sale had been forged. Subsequent analysis of similar transfer documents indicated that John Pemberton's signature had most likely been forged as well, which some accounts claim was instigated by his son Charley.
In 1892, Candler set out to incorporate a second company, the Coca-Cola Company (the current corporation). In 1910, Candler had the earliest records of the "Coca-Cola Company" destroyed; the destruction was said to have taken place during a move to new corporate offices around that time.
Charley Pemberton was found on June 23, 1894, unconscious, with a stick of opium by his side. Ten days later, Charley died at Atlanta's Grady Hospital at the age of 40.
On September 12, 1919, Coca-Cola Co. was purchased by a group of investors led by Ernest Woodruff's Trust Company for $25 million and reincorporated under Delaware General Corporation Law. The company publicly offered 500,000 shares for $40 a share. In 1923, Ernest Woodruff's son Robert W. Woodruff was elected president of the company. Woodruff expanded the company and brought Coca-Cola to the rest of the world. Coca-Cola began distributing bottles as "six-packs", encouraging customers to purchase the beverage for their homes.
During its first several decades, Coca-Cola officially wanted to be known by its full name despite being commonly called "Coke". This was due to company fears that the term "coke" would become a generic trademark, which to an extent became true in the Southern United States, where "coke" is used even for non-Coca-Cola products. The company also did not want its drink confused with the similarly named coal byproduct, which clearly was not safe to consume. Eventually, out of fear that another company might claim the trademark for "Coke", Coca-Cola embraced the nickname and officially endorsed the name "Coke" in 1941. "Coke" became a registered trademark of the Coca-Cola Company in 1945.
In 1986, the Coca-Cola Company merged with two of their bottling operators (owned by JTL Corporation and BCI Holding Corporation) to form Coca-Cola Enterprises Inc. (CCE).
In December 1991, Coca-Cola Enterprises merged with the Johnston Coca-Cola Bottling Group, Inc.
The first bottling of Coca-Cola occurred in Vicksburg, Mississippi, at the Biedenharn Candy Company on March 12, 1894. The proprietor of the bottling works was Joseph A. Biedenharn. The original bottles were Hutchinson bottles, very different from the much later hobble-skirt design of 1915 now so familiar.
A few years later, two entrepreneurs from Chattanooga, Tennessee, Benjamin F. Thomas and Joseph B. Whitehead, proposed the idea of bottling and were so persuasive that Candler signed a contract giving them control of the procedure for only one dollar. Candler later realized that he had made a grave mistake. He never collected his dollar, but in 1899, Chattanooga became the site of the first Coca-Cola bottling company. Candler remained content simply selling his company's syrup. The loosely worded contract proved problematic for the Coca-Cola Company for decades to come. Legal matters were not helped by the bottlers' decision to subcontract to other companies, effectively becoming parent bottlers. The contract specified that bottles would be sold at 5¢ each and had no fixed duration, leading to the fixed price of Coca-Cola from 1886 to 1959.
The first outdoor wall advertisement that promoted the Coca-Cola drink was painted in 1894 in Cartersville, Georgia. Cola syrup was sold as an over-the-counter dietary supplement for upset stomach. By the time of its 50th anniversary, the soft drink had reached the status of a national icon in the US. In 1935, it was certified kosher by Atlanta rabbi Tobias Geffen. With the help of Harold Hirsch, Geffen was the first person outside the company to see the top-secret ingredients list after Coke faced scrutiny from the American Jewish population regarding the drink's kosher status. Consequently, the company made minor changes in the sourcing of some ingredients so it could continue to be consumed by America's Jewish population, including during Passover. A yellow cap on a Coca-Cola drink indicates that it is kosher.
The longest-running commercial Coca-Cola soda fountain anywhere was Atlanta's Fleeman's Pharmacy, which first opened its doors in 1914. Jack Fleeman took over the pharmacy from his father and ran it until 1995, closing it after 81 years. On July 12, 1944, the one-billionth gallon of Coca-Cola syrup was manufactured by the Coca-Cola Company. Cans of Coke first appeared in 1955.
Sugar prices spiked in the 1970s because of Soviet demand and hoarding, and possibly because of manipulation of the sugar futures market; the Soviet Union was the largest producer of sugar at the time. In 1974, Coca-Cola switched over to high-fructose corn syrup because of the elevated prices.
On April 23, 1985, Coca-Cola, amid much publicity, attempted to change the formula of the drink with "New Coke". Follow-up taste tests revealed that most consumers preferred the taste of New Coke to both Coke and Pepsi, but Coca-Cola management was unprepared for the public's nostalgia for the old drink, leading to a backlash. The company gave in to protests and returned to the old formula under the name Coca-Cola Classic on July 10, 1985. "New Coke" remained available and was renamed Coke II in 1992; it was discontinued in 2002.
On July 5, 2005, it was revealed that Coca-Cola would resume operations in Iraq for the first time since the Arab League boycotted the company in 1968.
In April 2007, in Canada, the name "Coca-Cola Classic" was changed back to "Coca-Cola". The word "Classic" was removed because "New Coke" was no longer in production, eliminating the need to differentiate between the two. The formula remained unchanged. In January 2009, Coca-Cola stopped printing the word "Classic" on the labels of 16-US-fluid-ounce (470 ml) bottles sold in parts of the southeastern United States. The change was part of a larger strategy to rejuvenate the product's image. The word "Classic" was removed from all Coca-Cola products by 2011.
In November 2009, due to a dispute over wholesale prices of Coca-Cola products, Costco stopped restocking its shelves with Coke and Diet Coke for two months; a separate pouring-rights deal in 2013 saw Coke products removed from Costco food courts in favor of Pepsi. Some Costco locations (such as those in Tucson, Arizona) additionally sell imported Mexican Coca-Cola, made with cane sugar instead of corn syrup, from separate distributors. Coca-Cola introduced the 7.5-ounce mini-can in 2009, and on September 22, 2011, the company announced price reductions, asking retailers to sell eight-packs for $2.99. That same day, Coca-Cola announced the 12.5-ounce bottle, to sell for 89 cents. A 16-ounce bottle sold well at 99 cents since being re-introduced, but the price later rose to $1.19.
In 2012, Coca-Cola resumed business in Myanmar after a 60-year absence due to US-imposed investment sanctions against the country. Coca-Cola's bottling plant is located in Yangon and is part of the company's five-year plan and $200 million investment in Myanmar. Coca-Cola and its partners planned to invest US$5 billion in operations in India by 2020.
In February 2021, as part of a plan to combat plastic waste, Coca-Cola said that it would start selling its sodas in bottles made from 100% recycled plastic material in the United States, and that by 2030 it planned to recycle one bottle or can for each one it sold. Coca-Cola also began a trial of 2,000 paper bottles to see whether they held up, given concerns about safety and about altering the taste of the drink.
A typical can of Coca-Cola (12 fl ounces/355 ml) contains 39 grams of sugar, 50 mg of sodium, 0 grams fat, 0 grams potassium, and 140 calories. On May 5, 2014, Coca-Cola said it was working to remove a controversial ingredient, brominated vegetable oil, from its drinks.
A UK 330 ml can contains 35 grams of sugar and 139 calories.
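As a rough consistency check, the sugar concentration implied by these figures is about 11 g per 100 ml in both markets. The following minimal Python sketch is an illustration added here, not part of any official source; it uses only the can sizes and sugar masses quoted above:

```python
# Back-of-the-envelope check of the sugar figures quoted above:
# 39 g per 355 ml US can and 35 g per 330 ml UK can.

def sugar_per_100ml(sugar_g: float, volume_ml: float) -> float:
    """Grams of sugar per 100 ml of beverage."""
    return sugar_g / volume_ml * 100

print(f"US 355 ml can: {sugar_per_100ml(39, 355):.1f} g/100 ml")  # ~11.0
print(f"UK 330 ml can: {sugar_per_100ml(35, 330):.1f} g/100 ml")  # ~10.6
```

Both cans work out to roughly the same concentration, so the difference between them is essentially one of serving size.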
The exact formula of Coca-Cola's natural flavorings (but not its other ingredients, which are listed on the side of the bottle or can) is a trade secret. The original copy of the formula was held in Truist Financial's main vault in Atlanta for 86 years. Its predecessor, the Trust Company, was the underwriter for the Coca-Cola Company's initial public offering in 1919. On December 8, 2011, the original secret formula was moved from the vault at SunTrust Banks to a new vault on display for visitors at the World of Coca-Cola museum in downtown Atlanta.
According to Snopes, a popular myth states that only two executives have access to the formula, with each executive having only half the formula. However, several sources state that while Coca-Cola does have a rule restricting access to only two executives, each knows the entire formula and others, in addition to the prescribed duo, have known the formulation process.
On February 11, 2011, Ira Glass said on his PRI radio show, This American Life, that TAL staffers had found a recipe in "Everett Beal's Recipe Book", reproduced in the February 28, 1979, issue of The Atlanta Journal-Constitution, that they believed was either Pemberton's original formula for Coca-Cola, or a version that he made either before or after the product hit the market in 1886. The formula basically matched the one found in Pemberton's diary. Coca-Cola archivist Phil Mooney acknowledged that the recipe "could be a precursor" to the formula used in the original 1886 product, but emphasized that Pemberton's original formula is not the same as the one used in the current product.
When launched, Coca-Cola's two key ingredients were cocaine and caffeine. The cocaine was derived from the coca leaf and the caffeine from kola nut (also spelled "cola nut" at the time), leading to the name Coca-Cola.
Pemberton called for five ounces of coca leaf per gallon of syrup (approximately 37 g/L), a significant dose; in 1891, Candler claimed his formula (altered extensively from Pemberton's original) contained only a tenth of this amount. Coca-Cola once contained an estimated nine milligrams of cocaine per glass. (For comparison, a typical dose or "line" of cocaine is 50–75 mg.) In 1903, it was removed.
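The parenthetical conversion above can be reproduced with standard unit factors. This is a minimal sketch for illustration; the five-ounce-per-gallon quantity is the one stated in the paragraph, and the conversion constants are standard definitions:

```python
# Convert Pemberton's 5 oz of coca leaf per US gallon of syrup to g/L.
OZ_TO_G = 28.3495       # grams per avoirdupois ounce
US_GAL_TO_L = 3.78541   # liters per US gallon

grams_per_liter = 5 * OZ_TO_G / US_GAL_TO_L
print(f"{grams_per_liter:.1f} g/L")  # ~37.4 g/L, matching the ~37 g/L quoted
```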
After 1904, instead of using fresh leaves, Coca-Cola started using "spent" leaves – the leftovers of the cocaine-extraction process with trace levels of cocaine. Since then (by 1929), Coca-Cola has used a cocaine-free coca leaf extract. Today, that extract is prepared at a Stepan Company plant in Maywood, New Jersey, the only manufacturing plant authorized by the federal government to import and process coca leaves, which it obtains from Peru and Bolivia. Stepan Company extracts cocaine from the coca leaves, which it then sells to Mallinckrodt, the only company in the United States licensed to purify cocaine for medicinal use.
Long after the syrup had ceased to contain any significant amount of cocaine, in North Carolina "dope" remained a common colloquialism for Coca-Cola, and "dope-wagons" were trucks that transported it.
The kola nut acts as a flavoring and was the original source of caffeine in Coca-Cola. Kola nuts contain about 2.0 to 3.5% caffeine and have a bitter flavor.
In 1911, the US government sued in United States v. Forty Barrels and Twenty Kegs of Coca-Cola, hoping to force the Coca-Cola Company to remove caffeine from its formula. The court found that the syrup, when diluted as directed, would result in a beverage containing 1.21 grains (or 78.4 mg) of caffeine per 8 US fluid ounces (240 ml) serving. The case was decided in favor of the Coca-Cola Company at the district court, but subsequently in 1912, the US Pure Food and Drug Act was amended, adding caffeine to the list of "habit-forming" and "deleterious" substances which must be listed on a product's label. In 1913 the case was appealed to the Sixth Circuit in Cincinnati, where the ruling was affirmed, but then appealed again in 1916 to the Supreme Court, where the government effectively won as a new trial was ordered. The company then voluntarily reduced the amount of caffeine in its product, and offered to pay the government's legal costs to settle and avoid further litigation.
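The court's figure converts cleanly from grains to milligrams. As a quick check for illustration (the 1.21-grain value is the one stated above; the grain-to-milligram factor is the exact definition):

```python
# Verify the court's finding: 1.21 grains of caffeine per 8 US fl oz (240 ml).
GRAIN_TO_MG = 64.79891  # milligrams per grain (exact definition)

print(f"{1.21 * GRAIN_TO_MG:.1f} mg per serving")  # 78.4 mg, as stated
```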
Coca-Cola contains 34 mg of caffeine per 12 fluid ounces (9.8 mg per 100 ml).
The actual production and distribution of Coca-Cola follows a franchising model. The Coca-Cola Company only produces a syrup concentrate, which it sells to bottlers throughout the world, who hold Coca-Cola franchises for one or more geographical areas. The bottlers produce the final drink by mixing the syrup with filtered water and sweeteners, putting the mixture into cans and bottles, and carbonating it, which the bottlers then sell and distribute to retail stores, vending machines, restaurants, and foodservice distributors.
The Coca-Cola Company owns minority shares in some of its largest franchises, such as Coca-Cola Enterprises, Coca-Cola Amatil, Coca-Cola Hellenic Bottling Company, and Coca-Cola FEMSA, as well as some smaller ones, such as Coca-Cola Bottlers Uzbekistan, but fully independent bottlers produce almost half of the volume sold in the world. Independent bottlers are allowed to sweeten the drink according to local tastes.
The bottling plant in Skopje, Macedonia, received the 2009 award for "Best Bottling Company".
Since it announced its intention to begin distribution in Myanmar in June 2012, Coca-Cola has been officially available in every country in the world except Cuba and North Korea. However, it is reported to be available in both countries as a grey import. As of 2022, Coca-Cola has suspended its operations in Russia due to the invasion of Ukraine.
Coca-Cola has been a point of legal discussion in the Middle East. In the early 20th century, a fatwa was created in Egypt to discuss the question of "whether Muslims were permitted to drink Coca-Cola and Pepsi cola." The fatwa states: "According to the Muslim Hanefite, Shafi'ite, etc., the rule in Islamic law of forbidding or allowing foods and beverages is based on the presumption that such things are permitted unless it can be shown that they are forbidden on the basis of the Qur'an." The Muslim jurists stated that, unless the Qur'an specifically prohibits the consumption of a particular product, it is permissible to consume. Another clause was discussed, whereby the same rules apply if a person is unaware of the condition or ingredients of the item in question.
Coca-Cola first entered the Chinese market in the 1920s with no localized representation of its name. While the company researched a satisfactory translation, local shopkeepers created their own. These produced the desired "ko-ka ko-la" sound, but with odd meanings such as "female horse fastened with wax" or "bite the wax tadpole". In the 1930s, the company settled on the name "可口可樂(可口可乐)" (Ke-kou ke-le), taking into account the effects of syllable and meaning translations; the phrase means roughly "to allow the mouth to be able to rejoice". Coca-Cola's own account credits Chiang Yee with the localized name, but some sources say the name appeared before 1935, or that it was coined by Jerome T. Lieu, who studied at Columbia University in New York.
This is a list of variants of Coca-Cola introduced around the world. In addition to the caffeine-free version of the original, additional fruit flavors have been included over the years. Not included here are versions of Diet Coke and Coca-Cola Zero Sugar; variant versions of those no-calorie colas can be found at their respective articles.
The Coca-Cola logo was created by John Pemberton's bookkeeper, Frank Mason Robinson, in 1885. Robinson came up with the name and chose the logo's distinctive cursive script. The writing style used, known as Spencerian script, was developed in the mid-19th century and was the dominant form of formal handwriting in the United States during that period.
Robinson also played a significant role in early Coca-Cola advertising. His promotional suggestions to Pemberton included giving away thousands of free drink coupons and plastering the city of Atlanta with publicity banners and streetcar signs.
Coca-Cola came under scrutiny in Egypt in 1951 because of a conspiracy theory that the Coca-Cola logo, when reflected in a mirror, spells out "No Mohammed no Mecca" in Arabic.
The Coca-Cola bottle, called the "contour bottle" within the company, was created by bottle designer Earl R. Dean and Coca-Cola's general counsel, Harold Hirsch. In 1915, the Coca-Cola Company, represented by its general counsel, launched a competition among its bottle suppliers, as well as any other entrants, to create a new bottle for its beverage that would distinguish it from other beverage bottles: "a bottle which a person could recognize even if they felt it in the dark, and so shaped that, even if broken, a person could tell at a glance what it was."
Chapman J. Root, president of The Root Glass Company of Terre Haute, Indiana, turned the project over to members of his supervisory staff, including company auditor T. Clyde Edwards, plant superintendent Alexander Samuelsson, and Earl R. Dean, bottle designer and supervisor of the bottle molding room. Root and his subordinates decided to base the bottle's design on one of the soda's two ingredients, the coca leaf or the kola nut, but were unaware of what either ingredient looked like. Dean and Edwards went to the Emeline Fairbanks Memorial Library and were unable to find any information about coca or kola. Instead, Dean was inspired by a picture of the gourd-shaped cocoa pod in the Encyclopædia Britannica. Dean made a rough sketch of the pod and returned to the plant to show Root. He explained to Root how he could transform the shape of the pod into a bottle. Root gave Dean his approval.
Faced with the upcoming scheduled maintenance of the mold-making machinery, over the next 24 hours Dean sketched out a concept drawing which was approved by Root the next morning. Chapman Root approved the prototype bottle and a design patent was issued on the bottle in November 1915. The prototype never made it to production since its middle diameter was larger than its base, making it unstable on conveyor belts. Dean resolved this issue by decreasing the bottle's middle diameter. During the 1916 bottler's convention, Dean's contour bottle was chosen over other entries and was on the market the same year. By 1920, the contour bottle became the standard for the Coca-Cola Company. A revised version was also patented in 1923. Because the Patent Office releases the Patent Gazette on Tuesday, the bottle was patented on December 25, 1923, and was nicknamed the "Christmas bottle". Today, the contour Coca-Cola bottle is one of the most recognized packages on the planet.
As a reward for his efforts, Dean was offered a choice between a $500 bonus or a lifetime job at The Root Glass Company. He chose the lifetime job and kept it until the Owens-Illinois Glass Company bought out The Root Glass Company in the mid-1930s. Dean went on to work in other Midwestern glass factories.
Raymond Loewy updated the design in 1955 to accommodate larger formats.
Others have attributed inspiration for the design not to the cocoa pod, but to a Victorian hooped dress.
In 1944, Associate Justice Roger J. Traynor of the Supreme Court of California took advantage of a case involving a waitress injured by an exploding Coca-Cola bottle to articulate the doctrine of strict liability for defective products. Traynor's concurring opinion in Escola v. Coca-Cola Bottling Co. is widely recognized as a landmark case in US law today.
Karl Lagerfeld is the latest designer to have created a collection of aluminum bottles for Coca-Cola. Lagerfeld is not the first fashion designer to create a special version of the famous Coca-Cola Contour bottle. A number of other limited edition bottles by fashion designers for Coca-Cola Light soda have been created in the last few years, including Jean Paul Gaultier.
In 2009, in Italy, Coca-Cola Light had a Tribute to Fashion to celebrate 100 years of the recognizable contour bottle. Well known Italian designers Alberta Ferretti, Blumarine, Etro, Fendi, Marni, Missoni, Moschino, and Versace each designed limited edition bottles.
In 2019, Coca-Cola unveiled its first beverage bottle made with ocean plastic.
Pepsi, the flagship product of PepsiCo, the Coca-Cola Company's main rival in the soft drink industry, is usually second to Coke in sales, and outsells Coca-Cola in some markets. RC Cola, now owned by the Dr Pepper Snapple Group, the third-largest soft drink manufacturer, is also widely available.
Around the world, many local brands compete with Coke. In South and Central America Kola Real, also known as Big Cola, is a growing competitor to Coca-Cola. On the French island of Corsica, Corsica Cola, made by brewers of the local Pietra beer, is a growing competitor to Coca-Cola. In the French region of Brittany, Breizh Cola is available. In Peru, Inca Kola outsells Coca-Cola, which led the Coca-Cola Company to purchase the brand in 1999. In Sweden, Julmust outsells Coca-Cola during the Christmas season. In Scotland, the locally produced Irn-Bru was more popular than Coca-Cola until 2005, when Coca-Cola and Diet Coke began to outpace its sales. In the former East Germany, Vita Cola, invented during communist rule, is gaining popularity.
While Coca-Cola does not have the majority of the market share in India, The Coca-Cola Company's other brands like Thums Up and Sprite perform well. The Coca-Cola Company purchased Thums Up in 1993 when they re-entered the Indian market. As of 2023, Coca-Cola held a 9% market-share in India while Thums Up and Sprite had a 16% and 20% market share respectively.
Tropicola, a domestic drink, is served in Cuba instead of Coca-Cola, due to a United States embargo. French brand Mecca-Cola and British brand Qibla Cola are competitors to Coca-Cola in the Middle East.
In Turkey, Cola Turka, in Iran and the Middle East, Zamzam and Parsi Cola, in some parts of China, Future Cola, in the Czech Republic and Slovakia, Kofola, in Slovenia, Cockta, and the inexpensive Mercator Cola, sold only in the country's biggest supermarket chain, Mercator, are some of the brand's competitors. Classiko Cola, made by Tiko Group, the largest manufacturing company in Madagascar, is a competitor to Coca-Cola in many regions.
In 2021, Coca-Cola petitioned to cancel registrations for the marks Thums Up and Limca issued to Meenaxi Enterprise, Inc. based on misrepresentation of source. The Trademark Trial and Appeal Board concluded that "Meenaxi engaged in blatant misuse in a manner calculated to trade on the goodwill and reputation of Coca-Cola in an attempt to confuse consumers in the United States that its Thums Up and Limca marks were licensed or produced by the source of the same types of cola and lemon-lime soda sold under these marks for decades in India."
Coca-Cola's advertising has significantly affected American culture, and it is frequently credited with inventing the modern image of Santa Claus as an old man in a red-and-white suit. Although the company did start using the red-and-white Santa image in the 1930s, with its winter advertising campaigns illustrated by Haddon Sundblom, the motif was already common. Coca-Cola was not even the first soft drink company to use the modern image of Santa Claus in its advertising: White Rock Beverages used Santa in advertisements for its ginger ale in 1923, after first using him to sell mineral water in 1915. Before Santa Claus, Coca-Cola relied on images of smartly dressed young women to sell its beverages. Coca-Cola's first such advertisement appeared in 1895, featuring the young Bostonian actress Hilda Clark as its spokeswoman.
1941 saw the first use of the nickname "Coke" as an official trademark for the product, with a series of advertisements informing consumers that "Coke means Coca-Cola". In 1971, a song from a Coca-Cola commercial called "I'd Like to Teach the World to Sing", produced by Billy Davis, became a hit single. During the 1950s the term cola wars emerged, describing the ongoing battle between Coca-Cola and Pepsi for supremacy in the soft drink industry. Coca-Cola and Pepsi competed with new products, global expansion, US marketing initiatives, and sport sponsorships.
Coke's advertising is pervasive, as one of Woodruff's stated goals was to ensure that everyone on Earth drank Coca-Cola as their preferred beverage. This is especially true in southern areas of the United States, such as Atlanta, where Coke was born.
Some Coca-Cola television commercials between 1960 and 1986 were written and produced by former Atlanta radio veteran Don Naylor (WGST 1936–1950, WAGA 1951–1959) during his career as a producer for the McCann Erickson advertising agency. Many of these early television commercials for Coca-Cola featured movie stars, sports heroes, and popular singers.
During the 1980s, Pepsi ran a series of television advertisements showing people participating in taste tests demonstrating that, according to the commercials, "fifty percent of the participants who said they preferred Coke actually chose the Pepsi." Coca-Cola ran ads to combat Pepsi's ads in an incident sometimes referred to as the cola wars; one of Coke's ads compared the so-called Pepsi challenge to two chimpanzees deciding which tennis ball was furrier. Thereafter, Coca-Cola regained its leadership in the market.
Selena was a spokesperson for Coca-Cola from 1989 until the time of her death. She filmed three commercials for the company. In 1994, to commemorate her five years with the company, Coca-Cola issued special Selena Coke bottles.
The Coca-Cola Company purchased Columbia Pictures in 1982, and began inserting Coke-product images into many of its films. After a few early successes during Coca-Cola's ownership, Columbia began to underperform, and the studio was sold to Sony in 1989.
Coca-Cola has gone through a number of different advertising slogans in its long history, including "It's the real thing", "The pause that refreshes", "I'd like to buy the world a Coke", and "Coke is it".
In 1999, the Coca-Cola Company introduced the Coke Card, a loyalty program that offered deals on items like clothes, entertainment and food when the cardholder purchased a Coca-Cola Classic. The scheme was cancelled after three years, with a Coca-Cola spokesperson declining to state why.
The company then introduced another loyalty campaign in 2006, My Coke Rewards. This allows consumers to earn points by entering codes from specially marked packages of Coca-Cola products into a website. These points can be redeemed for various prizes or sweepstakes entries.
In Australia in 2011, Coca-Cola began the "Share a Coke" campaign, in which the Coca-Cola logo on bottles was replaced with first names. Coca-Cola printed the 150 most popular names in Australia on the bottles. The campaign was paired with a website page, a Facebook page, and an online "share a virtual Coke". The same campaign was introduced to Coca-Cola, Diet Coke, and Coke Zero bottles and cans in the UK in 2013.
Coca-Cola has also advertised its product to be consumed as a breakfast beverage, instead of coffee or tea for the morning caffeine.
From 1886 to 1959, the price of Coca-Cola was fixed at five cents, in part due to an advertising campaign.
Throughout the years, Coca-Cola has released limited-time collector bottles for Christmas.
The "Holidays are coming!" advertisement features a train of red delivery trucks, emblazoned with the Coca-Cola name and decorated with Christmas lights, driving through a snowy landscape and causing everything that they pass to light up and people to watch as they pass through.
The advertisement fell into disuse in 2001, as the Coca-Cola Company restructured its advertising campaigns so that advertising around the world was produced locally in each country, rather than centrally in the company's headquarters in Atlanta, Georgia. In 2007, the company brought back the campaign after, according to the company, many consumers telephoned its information center saying that they considered it to mark the beginning of Christmas. The advertisement was created by US advertising agency Doner, and has been part of the company's global advertising campaign for many years.
Keith Law, a producer and writer of commercials for Belfast CityBeat, was not convinced by Coca-Cola's reintroduction of the advertisement in 2007, saying that "I do not think there's anything Christmassy about HGVs and the commercial is too generic."
In 2001, singer Melanie Thornton recorded the campaign's advertising jingle as a single, "Wonderful Dream (Holidays Are Coming)", which entered the pop-music charts in Germany at no. 9. In 2005, Coca-Cola expanded the advertising campaign to radio, employing several variations of the jingle.
In 2011, Coca-Cola launched a campaign for the Indian holiday Diwali. The campaign included commercials, a song, and an integration with Shah Rukh Khan's film Ra.One.
Coca-Cola was the first commercial sponsor of the Olympic Games, at the 1928 games in Amsterdam, and has been an Olympics sponsor ever since. This corporate sponsorship included the 1996 Summer Olympics hosted in Atlanta, which allowed Coca-Cola to spotlight its hometown. Most recently, Coca-Cola has released localized commercials for the 2010 Winter Olympics in Vancouver; one Canadian commercial referred to Canada's hockey heritage and was modified after Canada won the gold medal game on February 28, 2010, by changing the ending line of the commercial to say "Now they know whose game they're playing".
Since 1978, Coca-Cola has sponsored the FIFA World Cup, and other competitions organized by FIFA. One FIFA tournament trophy, the FIFA World Youth Championship from Tunisia in 1977 to Malaysia in 1997, was called "FIFA – Coca-Cola Cup". In addition, Coca-Cola sponsors NASCAR's annual Coca-Cola 600 and Coke Zero Sugar 400 at Charlotte Motor Speedway in Concord, North Carolina and Daytona International Speedway in Daytona, Florida; since 2020, Coca-Cola has served as a premier partner of the NASCAR Cup Series, which includes holding the naming rights to the series' regular season championship trophy. Coca-Cola is also the sponsor of the iRacing Pro Series.
Coca-Cola has a long history of sports marketing relationships, which over the years have included Major League Baseball, the National Football League, the National Basketball Association, and the National Hockey League, as well as with many teams within those leagues. Coca-Cola has had a longtime relationship with the NFL's Pittsburgh Steelers, due in part to the now-famous 1979 television commercial featuring "Mean Joe" Greene, leading to the two opening the Coca-Cola Great Hall at Heinz Field in 2001 and a more recent Coca-Cola Zero commercial featuring Troy Polamalu.
Coca-Cola is the official soft drink of many collegiate football teams throughout the nation, partly due to Coca-Cola providing those schools with upgraded athletic facilities in exchange for Coca-Cola's sponsorship. This is especially prevalent at the high school level, which is more dependent on such contracts due to tighter budgets.
Coca-Cola was one of the official sponsors of the 1996 Cricket World Cup held on the Indian subcontinent. Coca-Cola is also one of the associate sponsors of Delhi Capitals in the Indian Premier League.
In England, Coca-Cola was the main sponsor of The Football League between 2004 and 2010, a name given to the three professional divisions below the Premier League in soccer (football). In 2005, Coca-Cola launched a competition for the 72 clubs of The Football League called "Win a Player". This allowed fans to place one vote per day for their favorite club, with one entry chosen at random earning £250,000 for the club; this was repeated in 2006. The "Win a Player" competition was controversial: at the end of the two competitions, Leeds United A.F.C. had the most votes by more than double, yet did not win any money to spend on a new player for the club. In 2007, the competition changed to "Buy a Player", which allowed fans to buy a bottle of Coca-Cola or Coca-Cola Zero and submit the code on the wrapper on the Coca-Cola website. This code could then earn anything from 50p to £100,000 for a club of their choice. This competition was favored over the old "Win a Player" competition, as it allowed all clubs to win some money. Between 1992 and 1998, Coca-Cola was the title sponsor of the Football League Cup (Coca-Cola Cup), the secondary cup tournament of England. Starting in the 2019–20 season, Coca-Cola agreed its biggest UK sponsorship deal, becoming Premier League football's seventh and final commercial partner for the UK and Ireland, China, Malaysia, Indonesia, Singapore, and the Egyptian and West African markets.
Between 1994 and 1997, Coca-Cola was also the title sponsor of the Scottish League Cup, renaming it to the Coca-Cola Cup like its English counterpart. From 1998 to 2001, the company was the title sponsor of the Irish League Cup in Northern Ireland, where it was named the Coca-Cola League Cup.
Coca-Cola is the presenting sponsor of the Tour Championship, the final event of the PGA Tour held each year at East Lake Golf Club in Atlanta, Georgia.
Introduced March 1, 2010, in Canada to celebrate the 2010 Winter Olympics, Coca-Cola sold gold-colored cans in packs of twelve 355 mL (12 US fl oz) cans each, in select stores.
Coca-Cola has been a partner of UEFA since 1988.
Coca-Cola has been prominently featured in many films and television programs. It was a major plot element in films such as One, Two, Three, The Coca-Cola Kid, and The Gods Must Be Crazy, among many others. In music, the lyrics of the Beatles' song "Come Together" include the line "He shoot Coca-Cola". The Beach Boys also referenced Coca-Cola in their 1964 song "All Summer Long", singing "Member when you spilled Coke all over your blouse?"
Elvis Presley, the best-selling solo artist of all time, promoted Coca-Cola during his last tour in 1977. The Coca-Cola Company used Presley's image to promote the product; for example, it used a song performed by Presley, "A Little Less Conversation", in a Japanese Coca-Cola commercial.
Other artists who promoted Coca-Cola include David Bowie, George Michael, Elton John, and Whitney Houston, who appeared in a Diet Coke commercial, among many others.
Not all musical references to Coca-Cola went well. A line in "Lola" by the Kinks was originally recorded as "You drink champagne and it tastes just like Coca-Cola." When the British Broadcasting Corporation refused to play the song because of the commercial reference, lead singer Ray Davies re-recorded the lyric as "it tastes just like cherry cola" to get airplay for the song.
Political cartoonist Michel Kichka satirized a famous Coca-Cola billboard in his 1982 poster "And I Love New York." On the billboard, the Coca-Cola wave is accompanied by the words "Enjoy Coke." In Kichka's poster, the lettering and script above the Coca-Cola wave instead read "Enjoy Cocaine."
Coca-Cola has a high degree of identification with the United States, being considered by some an "American brand" or an item representing America, a status criticized as Cocacolonization. After World War II, this gave rise to the brief production of White Coke at the request of Soviet Marshal Georgy Zhukov, who did not want to be seen drinking a symbol of American imperialism. The bottles were given by Eisenhower during a conference, and Marshal Zhukov enjoyed the drink. The bottles were disguised as vodka bottles, with a red star design on the cap, to avoid the suspicion of Soviet officials. The drink is also often a metonym for the Coca-Cola Company.
Coca-Cola was introduced to China in 1927, and was very popular until 1949. After the Chinese Civil War ended in 1949, the beverage was no longer imported into China, as it was perceived to be a symbol of decadent Western culture and capitalist lifestyle. Importation and sales of the beverage resumed in 1979, after diplomatic relations between the United States and China were restored.
There are some consumer boycotts of Coca-Cola in Arab countries due to Coke's early investment in Israel during the Arab League boycott of Israel (its competitor Pepsi stayed out of Israel). Mecca-Cola and Pepsi are popular alternatives in the Middle East.
A Coca-Cola fountain dispenser (officially a Fluids Generic Bioprocessing Apparatus or FGBA) was developed for use on the Space Shuttle as a test bed to determine if carbonated beverages can be produced from separately stored carbon dioxide, water, and flavored syrups and determine if the resulting fluids can be made available for consumption without bubble nucleation and resulting foam formation. FGBA-1 flew on STS-63 in 1995 and dispensed pre-mixed beverages, followed by FGBA-2 on STS-77 the next year. The latter mixed CO₂, water, and syrup to make beverages. It supplied 1.65 liters each of Coca-Cola and Diet Coke.
Coca-Cola is sometimes used for the treatment of gastric phytobezoars. In about 50% of cases studied, Coca-Cola alone was found to be effective in dissolving gastric phytobezoars. However, in a minority of cases this treatment can result in small bowel obstruction, necessitating surgical intervention.
Criticism of Coca-Cola has arisen from various groups around the world, concerning a variety of issues, including health effects, environmental issues, and business practices. The drink's coca flavoring, and the nickname "Coke", remain a common theme of criticism due to the relationship with the illegal drug cocaine. In 1911, the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging the caffeine in its drink was "injurious to health", leading to amended food safety legislation.
Beginning in the 1940s, PepsiCo started marketing their drinks to African Americans, a niche market that was largely ignored by white-owned manufacturers in the US, and was able to use its anti-racism stance as a selling point, attacking Coke's reluctance to hire blacks and support by the chairman of the Coca-Cola Company for segregationist Governor of Georgia Herman Talmadge. As a result of this campaign, PepsiCo's market share as compared to Coca-Cola's shot up dramatically in the 1950s with African American soft-drink consumers three times more likely to purchase Pepsi over Coke.
The Coca-Cola Company, its subsidiaries and products have been subject to sustained criticism by consumer groups, environmentalists, and watchdogs, particularly since the early 2000s. In 2019, BreakFreeFromPlastic named Coca-Cola the single biggest plastic polluter in the world. After 72,541 volunteers collected 476,423 pieces of plastic waste from around where they lived, a total of 11,732 pieces were found to be labeled with a Coca-Cola brand (including the Dasani, Sprite, and Fanta brands) in 37 countries across four continents. At the 2020 World Economic Forum in Davos, Coca-Cola's Head of Sustainability, Bea Perez, said customers like them because they reseal and are lightweight, and "business won't be in business if we don't accommodate consumers." In February 2022, Coca-Cola announced that it will aim to make 25 percent of its packaging reusable by 2030.
Coca-Cola Classic is rich in sugars, especially sucrose, which causes dental caries when consumed regularly. Besides this, the high caloric value of the sugars themselves can contribute to obesity. Both are major health issues in the developed world.
In February 2021, Coca-Cola received criticism after a video of a training session, which told employees to "try to be less white", was leaked by an employee. The session also said in order to be "less white" employees had to be less "arrogant" and "defensive".
In July 2001, the Coca-Cola Company was sued over its alleged use of far-right death squads (the United Self-Defense Forces of Colombia) to kidnap, torture, and kill Colombian bottler workers that were linked with trade union activity. Coca-Cola was sued in a US federal court in Miami by the Colombian food and drink union Sinaltrainal. The suit alleged that Coca-Cola was indirectly responsible for having "contracted with or otherwise directed paramilitary security forces that utilized extreme violence and murdered, tortured, unlawfully detained or otherwise silenced trade union leaders". This sparked campaigns to boycott Coca-Cola in the UK, US, Germany, Italy, and Australia. Javier Correa, the president of Sinaltrainal, said the campaign aimed to put pressure on Coca-Cola "to mitigate the pain and suffering" that union members had suffered.
Speaking from the Coca-Cola Company's headquarters in Atlanta, company spokesperson Rafael Fernandez Quiros said "Coca-Cola denies any connection to any human-rights violation of this type" and added "We do not own or operate the plants".
Coca-Cola can be used to remove grease and oil stains from concrete, metal, and clothes. It is also used to delay concrete from setting.
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 1986, the Coca-Cola Company merged with two of their bottling operators (owned by JTL Corporation and BCI Holding Corporation) to form Coca-Cola Enterprises Inc. (CCE).",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In December 1991, Coca-Cola Enterprises merged with the Johnston Coca-Cola Bottling Group, Inc.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "The first bottling of Coca-Cola occurred in Vicksburg, Mississippi, at the Biedenharn Candy Company on March 12, 1894. The proprietor of the bottling works was Joseph A. Biedenharn. The original bottles were Hutchinson bottles, very different from the much later hobble-skirt design of 1915 now so familiar.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "A few years later two entrepreneurs from Chattanooga, Tennessee, namely Benjamin F. Thomas and Joseph B. Whitehead, proposed the idea of bottling and were so persuasive that Candler signed a contract giving them control of the procedure for only one dollar. Candler later realized that he had made a grave mistake. Candler never collected his dollar, but in 1899, Chattanooga became the site of the first Coca-Cola bottling company. Candler remained very content just selling his company's syrup. The loosely termed contract proved to be problematic for the Coca-Cola Company for decades to come. Legal matters were not helped by the decision of the bottlers to subcontract to other companies, effectively becoming parent bottlers. This contract specified that bottles would be sold at 5¢ each and had no fixed duration, leading to the fixed price of Coca-Cola from 1886 to 1959.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The first outdoor wall advertisement that promoted the Coca-Cola drink was painted in 1894 in Cartersville, Georgia. Cola syrup was sold as an over-the-counter dietary supplement for upset stomach. By the time of its 50th anniversary, the soft drink had reached the status of a national icon in the US. In 1935, it was certified kosher by Atlanta rabbi Tobias Geffen. With the help of Harold Hirsch, Geffen was the first person outside the company to see the top-secret ingredients list after Coke faced scrutiny from the American Jewish population regarding the drink's kosher status. Consequently, the company made minor changes in the sourcing of some ingredients so it could continue to be consumed by America's Jewish population, including during Passover. A yellow cap on a Coca-Cola drink indicates that it is kosher.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The longest running commercial Coca-Cola soda fountain anywhere was Atlanta's Fleeman's Pharmacy, which first opened its doors in 1914. Jack Fleeman took over the pharmacy from his father and ran it until 1995; closing it after 81 years. On July 12, 1944, the one-billionth gallon of Coca-Cola syrup was manufactured by the Coca-Cola Company. Cans of Coke first appeared in 1955.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Sugar prices spiked in the 1970s because of Soviet demand/hoarding and possible futures contracts market manipulation. The Soviet Union was the largest producer of sugar at the time. In 1974 Coca-Cola switched over to high-fructose corn syrup because of the elevated prices.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "On April 23, 1985, Coca-Cola, amid much publicity, attempted to change the formula of the drink with \"New Coke\". Follow-up taste tests revealed most consumers preferred the taste of New Coke to both Coke and Pepsi but Coca-Cola management was unprepared for the public's nostalgia for the old drink, leading to a backlash. The company gave in to protests and returned to the old formula under the name Coca-Cola Classic, on July 10, 1985. \"New Coke\" remained available and was renamed Coke II in 1992; it was discontinued in 2002.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "On July 5, 2005, it was revealed that Coca-Cola would resume operations in Iraq for the first time since the Arab League boycotted the company in 1968.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "In April 2007, in Canada, the name \"Coca-Cola Classic\" was changed back to \"Coca-Cola\". The word \"Classic\" was removed because \"New Coke\" was no longer in production, eliminating the need to differentiate between the two. The formula remained unchanged. In January 2009, Coca-Cola stopped printing the word \"Classic\" on the labels of 16-US-fluid-ounce (470 ml) bottles sold in parts of the southeastern United States. The change was part of a larger strategy to rejuvenate the product's image. The word \"Classic\" was removed from all Coca-Cola products by 2011.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "In November 2009, due to a dispute over wholesale prices of Coca-Cola products, Costco stopped restocking its shelves with Coke and Diet Coke for two months; a separate pouring rights deal in 2013 saw Coke products removed from Costco food courts in favor of Pepsi. Some Costco locations (such as the ones in Tucson, Arizona) additionally sell imported Coca-Cola from Mexico with cane sugar instead of corn syrup from separate distributors. Coca-Cola introduced the 7.5-ounce mini-can in 2009, and on September 22, 2011, the company announced price reductions, asking retailers to sell eight-packs for $2.99. That same day, Coca-Cola announced the 12.5-ounce bottle, to sell for 89 cents. A 16-ounce bottle has sold well at 99 cents since being re-introduced, but the price was going up to $1.19.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "In 2012, Coca-Cola resumed business in Myanmar after 60 years of absence due to US-imposed investment sanctions against the country. Coca-Cola's bottling plant is located in Yangon and is part of the company's five-year plan and $200 million investment in Myanmar. Coca-Cola with its partners is to invest US$5 billion in its operations in India by 2020.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In February 2021, as a plan to combat plastic waste, Coca-Cola said that it would start selling its sodas in bottles made from 100% recycled plastic material in the United States, and by 2030 planned to recycle one bottle or can for each one it sold. Coca-Cola started by selling 2000 paper bottles to see if they held up due to the risk of safety and of changing the taste of the drink.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "A typical can of Coca-Cola (12 fl ounces/355 ml) contains 39 grams of sugar, 50 mg of sodium, 0 grams fat, 0 grams potassium, and 140 calories. On May 5, 2014, Coca-Cola said it was working to remove a controversial ingredient, brominated vegetable oil, from its drinks.",
"title": "Production"
},
{
"paragraph_id": 32,
"text": "A UK 330 ml can contains 35 grammes of sugar and 139 calories.",
"title": "Production"
},
{
"paragraph_id": 33,
"text": "The exact formula of Coca-Cola's natural flavorings (but not its other ingredients, which are listed on the side of the bottle or can) is a trade secret. The original copy of the formula was held in Truist Financial's main vault in Atlanta for 86 years. Its predecessor, the Trust Company, was the underwriter for the Coca-Cola Company's initial public offering in 1919. On December 8, 2011, the original secret formula was moved from the vault at SunTrust Banks to a new vault containing the formula which will be on display for visitors to its World of Coca-Cola museum in downtown Atlanta.",
"title": "Production"
},
{
"paragraph_id": 34,
"text": "According to Snopes, a popular myth states that only two executives have access to the formula, with each executive having only half the formula. However, several sources state that while Coca-Cola does have a rule restricting access to only two executives, each knows the entire formula and others, in addition to the prescribed duo, have known the formulation process.",
"title": "Production"
},
{
"paragraph_id": 35,
"text": "On February 11, 2011, Ira Glass said on his PRI radio show, This American Life, that TAL staffers had found a recipe in \"Everett Beal's Recipe Book\", reproduced in the February 28, 1979, issue of The Atlanta Journal-Constitution, that they believed was either Pemberton's original formula for Coca-Cola, or a version that he made either before or after the product hit the market in 1886. The formula basically matched the one found in Pemberton's diary. Coca-Cola archivist Phil Mooney acknowledged that the recipe \"could be a precursor\" to the formula used in the original 1886 product, but emphasized that Pemberton's original formula is not the same as the one used in the current product.",
"title": "Production"
},
{
"paragraph_id": 37,
"text": "When launched, Coca-Cola's two key ingredients were cocaine and caffeine. The cocaine was derived from the coca leaf and the caffeine from kola nut (also spelled \"cola nut\" at the time), leading to the name Coca-Cola.",
"title": "Production"
},
{
"paragraph_id": 38,
"text": "Pemberton called for five ounces of coca leaf per gallon of syrup (approximately 37 g/L), a significant dose; in 1891, Candler claimed his formula (altered extensively from Pemberton's original) contained only a tenth of this amount. Coca-Cola once contained an estimated nine milligrams of cocaine per glass. (For comparison, a typical dose or \"line\" of cocaine is 50–75 mg.) In 1903, it was removed.",
"title": "Production"
},
{
"paragraph_id": 39,
"text": "After 1904, instead of using fresh leaves, Coca-Cola started using \"spent\" leaves – the leftovers of the cocaine-extraction process with trace levels of cocaine. Since then (by 1929), Coca-Cola has used a cocaine-free coca leaf extract. Today, that extract is prepared at a Stepan Company plant in Maywood, New Jersey, the only manufacturing plant authorized by the federal government to import and process coca leaves, which it obtains from Peru and Bolivia. Stepan Company extracts cocaine from the coca leaves, which it then sells to Mallinckrodt, the only company in the United States licensed to purify cocaine for medicinal use.",
"title": "Production"
},
{
"paragraph_id": 40,
"text": "Long after the syrup had ceased to contain any significant amount of cocaine, in North Carolina \"dope\" remained a common colloquialism for Coca-Cola, and \"dope-wagons\" were trucks that transported it.",
"title": "Production"
},
{
"paragraph_id": 41,
"text": "The kola nut acts as a flavoring and the original source of caffeine in Coca-Cola. It contains about 2.0 to 3.5% caffeine, and has a bitter flavor.",
"title": "Production"
},
{
"paragraph_id": 42,
"text": "In 1911, the US government sued in United States v. Forty Barrels and Twenty Kegs of Coca-Cola, hoping to force the Coca-Cola Company to remove caffeine from its formula. The court found that the syrup, when diluted as directed, would result in a beverage containing 1.21 grains (or 78.4 mg) of caffeine per 8 US fluid ounces (240 ml) serving. The case was decided in favor of the Coca-Cola Company at the district court, but subsequently in 1912, the US Pure Food and Drug Act was amended, adding caffeine to the list of \"habit-forming\" and \"deleterious\" substances which must be listed on a product's label. In 1913 the case was appealed to the Sixth Circuit in Cincinnati, where the ruling was affirmed, but then appealed again in 1916 to the Supreme Court, where the government effectively won as a new trial was ordered. The company then voluntarily reduced the amount of caffeine in its product, and offered to pay the government's legal costs to settle and avoid further litigation.",
"title": "Production"
},
{
"paragraph_id": 43,
"text": "Coca-Cola contains 34 mg of caffeine per 12 fluid ounces (9.8 mg per 100 ml).",
"title": "Production"
},
{
"paragraph_id": 44,
"text": "The actual production and distribution of Coca-Cola follows a franchising model. The Coca-Cola Company only produces a syrup concentrate, which it sells to bottlers throughout the world, who hold Coca-Cola franchises for one or more geographical areas. The bottlers produce the final drink by mixing the syrup with filtered water and sweeteners, putting the mixture into cans and bottles, and carbonating it, which the bottlers then sell and distribute to retail stores, vending machines, restaurants, and foodservice distributors.",
"title": "Production"
},
{
"paragraph_id": 45,
"text": "The Coca-Cola Company owns minority shares in some of its largest franchises, such as Coca-Cola Enterprises, Coca-Cola Amatil, Coca-Cola Hellenic Bottling Company, and Coca-Cola FEMSA, as well as some smaller ones, such as Coca-Cola Bottlers Uzbekistan, but fully independent bottlers produce almost half of the volume sold in the world. Independent bottlers are allowed to sweeten the drink according to local tastes.",
"title": "Production"
},
{
"paragraph_id": 46,
"text": "The bottling plant in Skopje, Macedonia, received the 2009 award for \"Best Bottling Company\".",
"title": "Production"
},
{
"paragraph_id": 47,
"text": "Since it announced its intention to begin distribution in Myanmar in June 2012, Coca-Cola has been officially available in every country in the world except Cuba and North Korea. However, it is reported to be available in both countries as a grey import. As of 2022, Coca-Cola has suspended its operations in Russia due to the invasion of Ukraine.",
"title": "Geographic spread"
},
{
"paragraph_id": 48,
"text": "Coca-Cola has been a point of legal discussion in the Middle East. In the early 20th century, a fatwa was created in Egypt to discuss the question of \"whether Muslims were permitted to drink Coca-Cola and Pepsi cola.\" The fatwa states: \"According to the Muslim Hanefite, Shafi'ite, etc., the rule in Islamic law of forbidding or allowing foods and beverages is based on the presumption that such things are permitted unless it can be shown that they are forbidden on the basis of the Qur'an.\" The Muslim jurists stated that, unless the Qur'an specifically prohibits the consumption of a particular product, it is permissible to consume. Another clause was discussed, whereby the same rules apply if a person is unaware of the condition or ingredients of the item in question.",
"title": "Geographic spread"
},
{
"paragraph_id": 49,
"text": "Coca-Cola first entered the Chinese market in the 1920s with no localized representation of its name. While the company researched a satisfactory translation, local shopkeepers created their own. These produced the desired \"ko-ka ko-la\" sound, but with odd meanings such as \"female horse fastened with wax\" or \"bite the wax tadpole\". In the 1930s, the company settled on the name \"可口可樂(可口可乐)\" (Ke-kou ke-le) taking into account the effects of syllable and meaning translations. The phrase means roughly \"to allow the mouth to be able to rejoice\". The story introduction from Coca-Cola mentions that Chiang Yee provided the new localized name, but there are also sources that the localized name appeared before 1935, or that it was given by someone named Jerome T. Lieu who studied at Columbia University in New York.",
"title": "Geographic spread"
},
{
"paragraph_id": 50,
"text": "This is a list of variants of Coca-Cola introduced around the world. In addition to the caffeine-free version of the original, additional fruit flavors have been included over the years. Not included here are versions of Diet Coke and Coca-Cola Zero Sugar; variant versions of those no-calorie colas can be found at their respective articles.",
"title": "Brand portfolio"
},
{
"paragraph_id": 51,
"text": "The Coca-Cola logo was created by John Pemberton's bookkeeper, Frank Mason Robinson, in 1885. Robinson came up with the name and chose the logo's distinctive cursive script. The writing style used, known as Spencerian script, was developed in the mid-19th century and was the dominant form of formal handwriting in the United States during that period.",
"title": "Brand portfolio"
},
{
"paragraph_id": 52,
"text": "Robinson also played a significant role in early Coca-Cola advertising. His promotional suggestions to Pemberton included giving away thousands of free drink coupons and plastering the city of Atlanta with publicity banners and streetcar signs.",
"title": "Brand portfolio"
},
{
"paragraph_id": 53,
"text": "Coca-Cola came under scrutiny in Egypt in 1951 because of a conspiracy theory that the Coca-Cola logo, when reflected in a mirror, spells out \"No Mohammed no Mecca\" in Arabic.",
"title": "Brand portfolio"
},
{
"paragraph_id": 54,
"text": "The Coca-Cola bottle, called the \"contour bottle\" within the company, was created by bottle designer Earl R. Dean and Coca-Cola's general counsel, Harold Hirsch. In 1915, the Coca-Cola Company was represented by their general counsel to launch a competition among its bottle suppliers as well as any competition entrants to create a new bottle for their beverage that would distinguish it from other beverage bottles, \"a bottle which a person could recognize even if they felt it in the dark, and so shaped that, even if broken, a person could tell at a glance what it was.\"",
"title": "Brand portfolio"
},
{
"paragraph_id": 55,
"text": "Chapman J. Root, president of The Root Glass Company of Terre Haute, Indiana, turned the project over to members of his supervisory staff, including company auditor T. Clyde Edwards, plant superintendent Alexander Samuelsson, and Earl R. Dean, bottle designer and supervisor of the bottle molding room. Root and his subordinates decided to base the bottle's design on one of the soda's two ingredients, the coca leaf or the kola nut, but were unaware of what either ingredient looked like. Dean and Edwards went to the Emeline Fairbanks Memorial Library and were unable to find any information about coca or kola. Instead, Dean was inspired by a picture of the gourd-shaped cocoa pod in the Encyclopædia Britannica. Dean made a rough sketch of the pod and returned to the plant to show Root. He explained to Root how he could transform the shape of the pod into a bottle. Root gave Dean his approval.",
"title": "Brand portfolio"
},
{
"paragraph_id": 56,
"text": "Faced with the upcoming scheduled maintenance of the mold-making machinery, over the next 24 hours Dean sketched out a concept drawing which was approved by Root the next morning. Chapman Root approved the prototype bottle and a design patent was issued on the bottle in November 1915. The prototype never made it to production since its middle diameter was larger than its base, making it unstable on conveyor belts. Dean resolved this issue by decreasing the bottle's middle diameter. During the 1916 bottler's convention, Dean's contour bottle was chosen over other entries and was on the market the same year. By 1920, the contour bottle became the standard for the Coca-Cola Company. A revised version was also patented in 1923. Because the Patent Office releases the Patent Gazette on Tuesday, the bottle was patented on December 25, 1923, and was nicknamed the \"Christmas bottle\". Today, the contour Coca-Cola bottle is one of the most recognized packages on the planet.",
"title": "Brand portfolio"
},
{
"paragraph_id": 57,
"text": "As a reward for his efforts, Dean was offered a choice between a $500 bonus or a lifetime job at The Root Glass Company. He chose the lifetime job and kept it until the Owens-Illinois Glass Company bought out The Root Glass Company in the mid-1930s. Dean went on to work in other Midwestern glass factories.",
"title": "Brand portfolio"
},
{
"paragraph_id": 58,
"text": "Raymond Loewy updated the design in 1955 to accommodate larger formats.",
"title": "Brand portfolio"
},
{
"paragraph_id": 59,
"text": "Others have attributed inspiration for the design not to the cocoa pod, but to a Victorian hooped dress.",
"title": "Brand portfolio"
},
{
"paragraph_id": 60,
"text": "In 1944, Associate Justice Roger J. Traynor of the Supreme Court of California took advantage of a case involving a waitress injured by an exploding Coca-Cola bottle to articulate the doctrine of strict liability for defective products. Traynor's concurring opinion in Escola v. Coca-Cola Bottling Co. is widely recognized as a landmark case in US law today.",
"title": "Brand portfolio"
},
{
"paragraph_id": 61,
"text": "Karl Lagerfeld is the latest designer to have created a collection of aluminum bottles for Coca-Cola. Lagerfeld is not the first fashion designer to create a special version of the famous Coca-Cola Contour bottle. A number of other limited edition bottles by fashion designers for Coca-Cola Light soda have been created in the last few years, including Jean Paul Gaultier.",
"title": "Brand portfolio"
},
{
"paragraph_id": 62,
"text": "In 2009, in Italy, Coca-Cola Light had a Tribute to Fashion to celebrate 100 years of the recognizable contour bottle. Well known Italian designers Alberta Ferretti, Blumarine, Etro, Fendi, Marni, Missoni, Moschino, and Versace each designed limited edition bottles.",
"title": "Brand portfolio"
},
{
"paragraph_id": 63,
"text": "In 2019, Coca-Cola shared the first beverage bottle made with ocean plastic.",
"title": "Brand portfolio"
},
{
"paragraph_id": 64,
"text": "Pepsi, the flagship product of PepsiCo, the Coca-Cola Company's main rival in the soft drink industry, is usually second to Coke in sales, and outsells Coca-Cola in some markets. RC Cola, now owned by the Dr Pepper Snapple Group, the third-largest soft drink manufacturer, is also widely available.",
"title": "Competitors"
},
{
"paragraph_id": 65,
"text": "Around the world, many local brands compete with Coke. In South and Central America Kola Real, also known as Big Cola, is a growing competitor to Coca-Cola. On the French island of Corsica, Corsica Cola, made by brewers of the local Pietra beer, is a growing competitor to Coca-Cola. In the French region of Brittany, Breizh Cola is available. In Peru, Inca Kola outsells Coca-Cola, which led the Coca-Cola Company to purchase the brand in 1999. In Sweden, Julmust outsells Coca-Cola during the Christmas season. In Scotland, the locally produced Irn-Bru was more popular than Coca-Cola until 2005, when Coca-Cola and Diet Coke began to outpace its sales. In the former East Germany, Vita Cola, invented during communist rule, is gaining popularity.",
"title": "Competitors"
},
{
"paragraph_id": 66,
"text": "While Coca-Cola does not have the majority of the market share in India, The Coca-Cola Company's other brands like Thums Up and Sprite perform well. The Coca-Cola Company purchased Thums Up in 1993 when they re-entered the Indian market. As of 2023, Coca-Cola held a 9% market-share in India while Thums Up and Sprite had a 16% and 20% market share respectively.",
"title": "Competitors"
},
{
"paragraph_id": 67,
"text": "Tropicola, a domestic drink, is served in Cuba instead of Coca-Cola, due to a United States embargo. French brand Mecca-Cola and British brand Qibla Cola are competitors to Coca-Cola in the Middle East.",
"title": "Competitors"
},
{
"paragraph_id": 68,
"text": "In Turkey, Cola Turka, in Iran and the Middle East, Zamzam and Parsi Cola, in some parts of China, Future Cola, in the Czech Republic and Slovakia, Kofola, in Slovenia, Cockta, and the inexpensive Mercator Cola, sold only in the country's biggest supermarket chain, Mercator, are some of the brand's competitors. Classiko Cola, made by Tiko Group, the largest manufacturing company in Madagascar, is a competitor to Coca-Cola in many regions.",
"title": "Competitors"
},
{
"paragraph_id": 69,
"text": "In 2021, Coca-Cola petitioned to cancel registrations for the marks Thums Up and Limca issued to Meenaxi Enterprise, Inc. based on misrepresentation of source. The Trademark Trial and Appeal Board concluded that \"Meenaxi engaged in blatant misuse in a manner calculated to trade on the goodwill and reputation of Coca-Cola in an attempt to confuse consumers in the United States that its Thums Up and Limca marks were licensed or produced by the source of the same types of cola and lemon-lime soda sold under these marks for decades in India.\"",
"title": "Competitors"
},
{
"paragraph_id": 70,
"text": "Coca-Cola's advertising has significantly affected American culture, and it is frequently credited with inventing the modern image of Santa Claus as an old man in a red-and-white suit. Although the company did start using the red-and-white Santa image in the 1930s, with its winter advertising campaigns illustrated by Haddon Sundblom, the motif was already common. Coca-Cola was not even the first soft drink company to use the modern image of Santa Claus in its advertising: White Rock Beverages used Santa in advertisements for its ginger ale in 1923, after first using him to sell mineral water in 1915. Before Santa Claus, Coca-Cola relied on images of smartly dressed young women to sell its beverages. Coca-Cola's first such advertisement appeared in 1895, featuring the young Bostonian actress Hilda Clark as its spokeswoman.",
"title": "Advertising"
},
{
"paragraph_id": 71,
"text": "1941 saw the first use of the nickname \"Coke\" as an official trademark for the product, with a series of advertisements informing consumers that \"Coke means Coca-Cola\". In 1971, a song from a Coca-Cola commercial called \"I'd Like to Teach the World to Sing\", produced by Billy Davis, became a hit single. During the 1950s the term cola wars emerged, describing the on-going battle between Coca-Cola and Pepsi for supremacy in the soft drink industry. Coca-Cola and Pepsi were competing with new products, global expansion, US marketing initiatives and sport sponsorships.",
"title": "Advertising"
},
{
"paragraph_id": 72,
"text": "Coke's advertising is pervasive, as one of Woodruff's stated goals was to ensure that everyone on Earth drank Coca-Cola as their preferred beverage. This is especially true in southern areas of the United States, such as Atlanta, where Coke was born.",
"title": "Advertising"
},
{
"paragraph_id": 73,
"text": "Some Coca-Cola television commercials between 1960 through 1986 were written and produced by former Atlanta radio veteran Don Naylor (WGST 1936–1950, WAGA 1951–1959) during his career as a producer for the McCann Erickson advertising agency. Many of these early television commercials for Coca-Cola featured movie stars, sports heroes, and popular singers.",
"title": "Advertising"
},
{
"paragraph_id": 74,
"text": "During the 1980s, Pepsi ran a series of television advertisements showing people participating in taste tests demonstrating that, according to the commercials, \"fifty percent of the participants who said they preferred Coke actually chose the Pepsi.\" Coca-Cola ran ads to combat Pepsi's ads in an incident sometimes referred to as the cola wars; one of Coke's ads compared the so-called Pepsi challenge to two chimpanzees deciding which tennis ball was furrier. Thereafter, Coca-Cola regained its leadership in the market.",
"title": "Advertising"
},
{
"paragraph_id": 75,
"text": "Selena was a spokesperson for Coca-Cola from 1989 until the time of her death. She filmed three commercials for the company. During 1994, to commemorate her five years with the company, Coca-Cola issued special Selena coke bottles.",
"title": "Advertising"
},
{
"paragraph_id": 76,
"text": "The Coca-Cola Company purchased Columbia Pictures in 1982, and began inserting Coke-product images into many of its films. After a few early successes during Coca-Cola's ownership, Columbia began to underperform, and the studio was sold to Sony in 1989.",
"title": "Advertising"
},
{
"paragraph_id": 77,
"text": "Coca-Cola has gone through a number of different advertising slogans in its long history, including \"It's the real thing\", \"The pause that refreshes\", \"I'd like to buy the world a Coke\", and \"Coke is it\".",
"title": "Advertising"
},
{
"paragraph_id": 78,
"text": "In 1999, the Coca-Cola Company introduced the Coke Card, a loyalty program that offered deals on items like clothes, entertainment and food when the cardholder purchased a Coca-Cola Classic. The scheme was cancelled after three years, with a Coca-Cola spokesperson declining to state why.",
"title": "Advertising"
},
{
"paragraph_id": 79,
"text": "The company then introduced another loyalty campaign in 2006, My Coke Rewards. This allows consumers to earn points by entering codes from specially marked packages of Coca-Cola products into a website. These points can be redeemed for various prizes or sweepstakes entries.",
"title": "Advertising"
},
{
"paragraph_id": 80,
"text": "In Australia in 2011, Coca-Cola began the \"share a Coke\" campaign, where the Coca-Cola logo was replaced on the bottles and replaced with first names. Coca-Cola used the 150 most popular names in Australia to print on the bottles. The campaign was paired with a website page, Facebook page, and an online \"share a virtual Coke\". The same campaign was introduced to Coca-Cola, Diet Coke and Coke Zero bottles and cans in the UK in 2013.",
"title": "Advertising"
},
{
"paragraph_id": 81,
"text": "Coca-Cola has also advertised its product to be consumed as a breakfast beverage, instead of coffee or tea for the morning caffeine.",
"title": "Advertising"
},
{
"paragraph_id": 82,
"text": "From 1886 to 1959, the price of Coca-Cola was fixed at five cents, in part due to an advertising campaign.",
"title": "Advertising"
},
{
"paragraph_id": 83,
"text": "Throughout the years, Coca-Cola has released limited-time collector bottles for Christmas.",
"title": "Advertising"
},
{
"paragraph_id": 84,
"text": "The \"Holidays are coming!\" advertisement features a train of red delivery trucks, emblazoned with the Coca-Cola name and decorated with Christmas lights, driving through a snowy landscape and causing everything that they pass to light up and people to watch as they pass through.",
"title": "Advertising"
},
{
"paragraph_id": 85,
"text": "The advertisement fell into disuse in 2001, as the Coca-Cola Company restructured its advertising campaigns so that advertising around the world was produced locally in each country, rather than centrally in the company's headquarters in Atlanta, Georgia. In 2007, the company brought back the campaign after, according to the company, many consumers telephoned its information center saying that they considered it to mark the beginning of Christmas. The advertisement was created by US advertising agency Doner, and has been part of the company's global advertising campaign for many years.",
"title": "Advertising"
},
{
"paragraph_id": 86,
"text": "Keith Law, a producer and writer of commercials for Belfast CityBeat, was not convinced by Coca-Cola's reintroduction of the advertisement in 2007, saying that \"I do not think there's anything Christmassy about HGVs and the commercial is too generic.\"",
"title": "Advertising"
},
{
"paragraph_id": 87,
"text": "In 2001, singer Melanie Thornton recorded the campaign's advertising jingle as a single, \"Wonderful Dream (Holidays Are Coming)\", which entered the pop-music charts in Germany at no. 9. In 2005, Coca-Cola expanded the advertising campaign to radio, employing several variations of the jingle.",
"title": "Advertising"
},
{
"paragraph_id": 88,
"text": "In 2011, Coca-Cola launched a campaign for the Indian holiday Diwali. The campaign included commercials, a song, and an integration with Shah Rukh Khan's film Ra.One.",
"title": "Advertising"
},
{
"paragraph_id": 89,
"text": "Coca-Cola was the first commercial sponsor of the Olympic Games, at the 1928 games in Amsterdam, and has been an Olympics sponsor ever since. This corporate sponsorship included the 1996 Summer Olympics hosted in Atlanta, which allowed Coca-Cola to spotlight its hometown. Most recently, Coca-Cola has released localized commercials for the 2010 Winter Olympics in Vancouver; one Canadian commercial referred to Canada's hockey heritage and was modified after Canada won the gold medal game on February 28, 2010, by changing the ending line of the commercial to say \"Now they know whose game they're playing\".",
"title": "Advertising"
},
{
"paragraph_id": 90,
"text": "Since 1978, Coca-Cola has sponsored the FIFA World Cup, and other competitions organized by FIFA. One FIFA tournament trophy, the FIFA World Youth Championship from Tunisia in 1977 to Malaysia in 1997, was called \"FIFA – Coca-Cola Cup\". In addition, Coca-Cola sponsors NASCAR's annual Coca-Cola 600 and Coke Zero Sugar 400 at Charlotte Motor Speedway in Concord, North Carolina and Daytona International Speedway in Daytona, Florida; since 2020, Coca-Cola has served as a premier partner of the NASCAR Cup Series, which includes holding the naming rights to the series' regular season championship trophy. Coca-Cola is also the sponsor of the iRacing Pro Series.",
"title": "Advertising"
},
{
"paragraph_id": 91,
"text": "Coca-Cola has a long history of sports marketing relationships, which over the years have included Major League Baseball, the National Football League, the National Basketball Association, and the National Hockey League, as well as with many teams within those leagues. Coca-Cola has had a longtime relationship with the NFL's Pittsburgh Steelers, due in part to the now-famous 1979 television commercial featuring \"Mean Joe\" Greene, leading to the two opening the Coca-Cola Great Hall at Heinz Field in 2001 and a more recent Coca-Cola Zero commercial featuring Troy Polamalu.",
"title": "Advertising"
},
{
"paragraph_id": 92,
"text": "Coca-Cola is the official soft drink of many collegiate football teams throughout the nation, partly due to Coca-Cola providing those schools with upgraded athletic facilities in exchange for Coca-Cola's sponsorship. This is especially prevalent at the high school level, which is more dependent on such contracts due to tighter budgets.",
"title": "Advertising"
},
{
"paragraph_id": 93,
"text": "Coca-Cola was one of the official sponsors of the 1996 Cricket World Cup held on the Indian subcontinent. Coca-Cola is also one of the associate sponsors of Delhi Capitals in the Indian Premier League.",
"title": "Advertising"
},
{
"paragraph_id": 94,
"text": "In England, Coca-Cola was the main sponsor of The Football League between 2004 and 2010, a name given to the three professional divisions below the Premier League in soccer (football). In 2005, Coca-Cola launched a competition for the 72 clubs of The Football League – it was called \"Win a Player\". This allowed fans to place one vote per day for their favorite club, with one entry being chosen at random earning £250,000 for the club; this was repeated in 2006. The \"Win A Player\" competition was very controversial, as at the end of the 2 competitions, Leeds United A.F.C. had the most votes by more than double, yet they did not win any money to spend on a new player for the club. In 2007, the competition changed to \"Buy a Player\". This competition allowed fans to buy a bottle of Coca-Cola or Coca-Cola Zero and submit the code on the wrapper on the Coca-Cola website. This code could then earn anything from 50p to £100,000 for a club of their choice. This competition was favored over the old \"Win a Player\" competition, as it allowed all clubs to win some money. Between 1992 and 1998, Coca-Cola was the title sponsor of the Football League Cup (Coca-Cola Cup), the secondary cup tournament of England. Starting in 2019–20 season, Coca-Cola has agreed its biggest UK sponsorship deal by becoming Premier League football's seventh and final commercial partner for the UK and Ireland, China, Malaysia, Indonesia, Singapore, Egyptian and the West African markets.",
"title": "Advertising"
},
{
"paragraph_id": 95,
"text": "Between 1994 and 1997, Coca-Cola was also the title sponsor of the Scottish League Cup, renaming it to the Coca-Cola Cup like its English counterpart. From 1998 to 2001, the company was the title sponsor of the Irish League Cup in Northern Ireland, where it was named the Coca-Cola League Cup.",
"title": "Advertising"
},
{
"paragraph_id": 96,
"text": "Coca-Cola is the presenting sponsor of the Tour Championship, the final event of the PGA Tour held each year at East Lake Golf Club in Atlanta, Georgia.",
"title": "Advertising"
},
{
"paragraph_id": 97,
"text": "Introduced March 1, 2010, in Canada, to celebrate the 2010 Winter Olympics, Coca-Cola sold gold colored cans in packs of 12 355 mL (12 imp fl oz; 12 US fl oz) each, in select stores.",
"title": "Advertising"
},
{
"paragraph_id": 98,
"text": "Coca-Cola which has been a partner with UEFA since 1988.",
"title": "Advertising"
},
{
"paragraph_id": 99,
"text": "Coca-Cola has been prominently featured in many films and television programs. It was a major plot element in films such as One, Two, Three, The Coca-Cola Kid, and The Gods Must Be Crazy, among many others. In music, such as in the Beatles' song, \"Come Together\", the lyrics say, \"He shoot Coca-Cola\". The Beach Boys also referenced Coca-Cola in their 1964 song \"All Summer Long\", singing \"Member when you spilled Coke all over your blouse?\"",
"title": "Advertising"
},
{
"paragraph_id": 100,
"text": "The best selling solo artist of all time Elvis Presley, promoted Coca-Cola during his last tour of 1977. The Coca-Cola Company used Presley's image to promote the product. For example, the company used a song performed by Presley, \"A Little Less Conversation\", in a Japanese Coca-Cola commercial.",
"title": "Advertising"
},
{
"paragraph_id": 101,
"text": "Other artists that promoted Coca-Cola include David Bowie, George Michael, Elton John, and Whitney Houston, who appeared in the Diet Coke commercial, among many others.",
"title": "Advertising"
},
{
"paragraph_id": 102,
"text": "Not all musical references to Coca-Cola went well. A line in \"Lola\" by the Kinks was originally recorded as \"You drink champagne and it tastes just like Coca-Cola.\" When the British Broadcasting Corporation refused to play the song because of the commercial reference, lead singer Ray Davies re-recorded the lyric as \"it tastes just like cherry cola\" to get airplay for the song.",
"title": "Advertising"
},
{
"paragraph_id": 103,
"text": "Political cartoonist Michel Kichka satirized a famous Coca-Cola billboard in his 1982 poster \"And I Love New York.\" On the billboard, the Coca-Cola wave is accompanied by the words \"Enjoy Coke.\" In Kichka's poster, the lettering and script above the Coca-Cola wave instead read \"Enjoy Cocaine.\"",
"title": "Advertising"
},
{
"paragraph_id": 104,
"text": "Coca-Cola has a high degree of identification with the United States, being considered by some an \"American Brand\" or as an item representing America, criticized as Cocacolonization. After World War II, this gave rise to the brief production of White Coke by the request of and for Soviet Marshal Georgy Zhukov, who did not want to be seen drinking a symbol of American imperialism. The bottles were given by the President Eisenhower during a conference, and Marshal Zhukov enjoyed the drink. The bottles were disguised as vodka bottles, with the cap having a red star design, to avoid suspicion of Soviet officials. The drink is also often a metonym for the Coca-Cola Company.",
"title": "Use as political and corporate symbol"
},
{
"paragraph_id": 105,
"text": "Coca-Cola was introduced to China in 1927, and was very popular until 1949. After the Chinese Civil War ended in 1949, the beverage was no longer imported into China, as it was perceived to be a symbol of decadent Western culture and capitalist lifestyle. Importation and sales of the beverage resumed in 1979, after diplomatic relations between the United States and China were restored.",
"title": "Use as political and corporate symbol"
},
{
"paragraph_id": 106,
"text": "There are some consumer boycotts of Coca-Cola in Arab countries due to Coke's early investment in Israel during the Arab League boycott of Israel (its competitor Pepsi stayed out of Israel). Mecca-Cola and Pepsi are popular alternatives in the Middle East.",
"title": "Use as political and corporate symbol"
},
{
"paragraph_id": 107,
"text": "A Coca-Cola fountain dispenser (officially a Fluids Generic Bioprocessing Apparatus or FGBA) was developed for use on the Space Shuttle as a test bed to determine if carbonated beverages can be produced from separately stored carbon dioxide, water, and flavored syrups and determine if the resulting fluids can be made available for consumption without bubble nucleation and resulting foam formation. FGBA-1 flew on STS-63 in 1995 and dispensed pre-mixed beverages, followed by FGBA-2 on STS-77 the next year. The latter mixed CO₂, water, and syrup to make beverages. It supplied 1.65 liters each of Coca-Cola and Diet Coke.",
"title": "Use as political and corporate symbol"
},
{
"paragraph_id": 108,
"text": "Coca-Cola is sometimes used for the treatment of gastric phytobezoars. In about 50% of cases studied, Coca-Cola alone was found to be effective in gastric phytobezoar dissolution. This treatment can however result in the potential of developing small bowel obstruction in a minority of cases, necessitating surgical intervention.",
"title": "Medicinal application"
},
{
"paragraph_id": 109,
"text": "Criticism of Coca-Cola has arisen from various groups around the world, concerning a variety of issues, including health effects, environmental issues, and business practices. The drink's coca flavoring, and the nickname \"Coke\", remain a common theme of criticism due to the relationship with the illegal drug cocaine. In 1911, the US government seized 40 barrels and 20 kegs of Coca-Cola syrup in Chattanooga, Tennessee, alleging the caffeine in its drink was \"injurious to health\", leading to amended food safety legislation.",
"title": "Criticism"
},
{
"paragraph_id": 110,
"text": "Beginning in the 1940s, PepsiCo started marketing their drinks to African Americans, a niche market that was largely ignored by white-owned manufacturers in the US, and was able to use its anti-racism stance as a selling point, attacking Coke's reluctance to hire blacks and support by the chairman of the Coca-Cola Company for segregationist Governor of Georgia Herman Talmadge. As a result of this campaign, PepsiCo's market share as compared to Coca-Cola's shot up dramatically in the 1950s with African American soft-drink consumers three times more likely to purchase Pepsi over Coke.",
"title": "Criticism"
},
{
"paragraph_id": 111,
"text": "The Coca-Cola Company, its subsidiaries and products have been subject to sustained criticism by consumer groups, environmentalists, and watchdogs, particularly since the early 2000s. In 2019, BreakFreeFromPlastic named Coca-Cola the single biggest plastic polluter in the world. After 72,541 volunteers collected 476,423 pieces of plastic waste from around where they lived, a total of 11,732 pieces were found to be labeled with a Coca-Cola brand (including the Dasani, Sprite, and Fanta brands) in 37 countries across four continents. At the 2020 World Economic Forum in Davos, Coca-Cola's Head of Sustainability, Bea Perez, said customers like them because they reseal and are lightweight, and \"business won't be in business if we don't accommodate consumers.\" In February 2022, Coca-Cola announced that it will aim to make 25 percent of its packaging reusable by 2030.",
"title": "Criticism"
},
{
"paragraph_id": 112,
"text": "Coca-Cola Classic is rich in sugars, especially sucrose, which causes dental caries when consumed regularly. Besides this, the high caloric value of the sugars themselves can contribute to obesity. Both are major health issues in the developed world.",
"title": "Criticism"
},
{
"paragraph_id": 113,
"text": "In February 2021, Coca-Cola received criticism after a video of a training session, which told employees to \"try to be less white\", was leaked by an employee. The session also said in order to be \"less white\" employees had to be less \"arrogant\" and \"defensive\".",
"title": "Criticism"
},
{
"paragraph_id": 114,
"text": "In July 2001, the Coca-Cola Company was sued over its alleged use of far-right death squads (the United Self-Defense Forces of Colombia) to kidnap, torture, and kill Colombian bottler workers that were linked with trade union activity. Coca-Cola was sued in a US federal court in Miami by the Colombian food and drink union Sinaltrainal. The suit alleged that Coca-Cola was indirectly responsible for having \"contracted with or otherwise directed paramilitary security forces that utilized extreme violence and murdered, tortured, unlawfully detained or otherwise silenced trade union leaders\". This sparked campaigns to boycott Coca-Cola in the UK, US, Germany, Italy, and Australia. Javier Correa, the president of Sinaltrainal, said the campaign aimed to put pressure on Coca-Cola \"to mitigate the pain and suffering\" that union members had suffered.",
"title": "Criticism"
},
{
"paragraph_id": 115,
"text": "Speaking from the Coca-Cola Company's headquarters in Atlanta, company spokesperson Rafael Fernandez Quiros said \"Coca-Cola denies any connection to any human-rights violation of this type\" and added \"We do not own or operate the plants\".",
"title": "Criticism"
},
{
"paragraph_id": 116,
"text": "Coca-Cola can be used to remove grease and oil stains from concrete, metal, and clothes. It is also used to delay concrete from setting.",
"title": "Other uses"
}
] | Coca-Cola, or Coke, is a carbonated soft drink manufactured by the Coca-Cola Company. In 2013, Coke products were sold in over 200 countries worldwide, with consumers drinking more than 1.8 billion company beverage servings each day. Coca-Cola ranked No. 87 in the 2018 Fortune 500 list of the largest United States corporations by total revenue. Based on Interbrand's "best global brand" study of 2020, Coca-Cola was the world's sixth most valuable brand. Originally marketed as a temperance drink and intended as a patent medicine, Coca-Cola was invented in the late 19th century by John Stith Pemberton in Atlanta, Georgia. In 1888, Pemberton sold the ownership rights to Asa Griggs Candler, a businessman, whose marketing tactics led Coca-Cola to its dominance of the global soft-drink market throughout the 20th and 21st century. The name refers to two of its original ingredients: coca leaves and kola nuts. The current formula of Coca-Cola remains a trade secret; however, a variety of reported recipes and experimental recreations have been published. The secrecy around the formula has been used by Coca-Cola in its marketing as only a handful of anonymous employees know the formula. The drink has inspired imitators and created a whole classification of soft drink: colas. The Coca-Cola Company produces concentrate, which is then sold to licensed Coca-Cola bottlers throughout the world. The bottlers, who hold exclusive territory contracts with the company, produce the finished product in cans and bottles from the concentrate, in combination with filtered water and sweeteners. A typical 12-US-fluid-ounce (350 ml) can contains 38 grams (1.3 oz) of sugar. The bottlers then sell, distribute, and merchandise Coca-Cola to retail stores, restaurants, and vending machines throughout the world. The Coca-Cola Company also sells concentrate for soda fountains of major restaurants and foodservice distributors. The Coca-Cola Company has on occasion introduced other cola drinks under the Coke name. The most common of these is Diet Coke, along with others including Caffeine-Free Coca-Cola, Diet Coke Caffeine-Free, Coca-Cola Zero Sugar, Coca-Cola Cherry, Coca-Cola Vanilla, and special versions with lemon, lime, and coffee. Coca-Cola was called Coca-Cola Classic from July 1985 to 2009, to distinguish it from "New Coke". | 2001-11-06T04:39:46Z | 2023-12-30T20:55:56Z | [
"Template:Use mdy dates",
"Template:See also",
"Template:Citation needed span",
"Template:Cite magazine",
"Template:Redirect",
"Template:Distinguish",
"Template:C.",
"Template:As of",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite encyclopedia",
"Template:Webarchive",
"Template:Short description",
"Template:Authority control",
"Template:Infobox drink",
"Template:Anchor",
"Template:Caselaw source",
"Template:Varieties of Coca-Cola",
"Template:Use American English",
"Template:More citations needed",
"Template:Div col",
"Template:Colas",
"Template:Convert",
"Template:Pp-move",
"Template:Em",
"Template:Cite journal",
"Template:Coca-Cola",
"Template:Pp",
"Template:Lang",
"Template:Cite web",
"Template:Cite news",
"Template:YouTube",
"Template:In lang",
"Template:Main",
"Template:Clear",
"Template:Dead link",
"Template:Citation",
"Template:Official website",
"Template:About",
"Template:Portal",
"Template:Div col end",
"Template:Cite book",
"Template:Commons category",
"Template:Bracket"
] | https://en.wikipedia.org/wiki/Coca-Cola |
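The caffeine figure quoted in the United States v. Forty Barrels paragraph above (1.21 grains per 8 US fl oz, given as 78.4 mg) can be checked with a few lines of arithmetic. This is an added illustrative sketch, not part of the article; the constants are the standard definitions of the grain and the US fluid ounce, and the variable names are invented:

```python
# Sanity check of the 1911 Forty Barrels figure: 1.21 grains of caffeine
# per 8 US fl oz serving should equal the 78.4 mg quoted in the text.
GRAIN_MG = 64.79891          # one grain, in milligrams (exact definition)
US_FL_OZ_ML = 29.5735295625  # one US fluid ounce, in milliliters (exact)

caffeine_mg = 1.21 * GRAIN_MG  # caffeine per serving, in mg
serving_ml = 8 * US_FL_OZ_ML   # serving size, in ml

print(f"{caffeine_mg:.1f} mg of caffeine per {serving_ml:.0f} ml serving")
# prints: 78.4 mg of caffeine per 237 ml serving
```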
6,693 | Cofinality | In mathematics, especially in order theory, the cofinality cf(A) of a partially ordered set A is the least of the cardinalities of the cofinal subsets of A.
This definition of cofinality relies on the axiom of choice, as it uses the fact that every non-empty set of cardinal numbers has a least member. The cofinality of a partially ordered set A can alternatively be defined as the least ordinal x such that there is a function from x to A with cofinal image. This second definition makes sense without the axiom of choice. If the axiom of choice is assumed, as will be the case in the rest of this article, then the two definitions are equivalent.
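As a worked example of the second definition (an illustration added here, not part of the original text), consider the real line under its usual order:

```latex
% The map n -> n from omega into (R, <) has cofinal image, since every
% real number lies below some natural number; and no function from a
% finite ordinal into R has cofinal image, because a finite set of reals
% is bounded above. Hence
\operatorname{cf}(\mathbb{R}, <) = \omega ,
% which, as a cardinal, is aleph_0.
```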
Cofinality can be similarly defined for a directed set and is used to generalize the notion of a subsequence in a net.
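For finite partially ordered sets, the definition can be checked directly by brute force; the following sketch (illustrative code with an invented example poset, not drawn from the article) searches for the smallest cofinal subset:

```python
from itertools import combinations

def is_cofinal(subset, elements, leq):
    """B is cofinal in (elements, leq) if every element lies below
    (or equals) some member of B."""
    return all(any(leq(a, b) for b in subset) for a in elements)

def cofinality(elements, leq):
    """Least cardinality of a cofinal subset, by exhaustive search
    over subsets of increasing size (finite posets only)."""
    elements = list(elements)
    for size in range(len(elements) + 1):
        for subset in combinations(elements, size):
            if is_cofinal(subset, elements, leq):
                return size

# Example poset (an invented illustration): {1, 2, 3, 4, 6} ordered by
# divisibility. The maximal elements are 4 and 6, and for a finite poset
# the set of maximal elements is the unique smallest cofinal subset.
divides = lambda a, b: b % a == 0
print(cofinality({1, 2, 3, 4, 6}, divides))  # prints 2, e.g. {4, 6}
```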
If $A$ admits a totally ordered cofinal subset, then we can find a subset $B$ that is well-ordered and cofinal in $A$. Any subset of $B$ is also well-ordered. Two cofinal subsets of $B$ with minimal cardinality (that is, their cardinality is the cofinality of $B$) need not be order isomorphic: for example, if $B = \omega + \omega$, then both $\omega + \omega$ and $\{\omega + n : n < \omega\}$, viewed as subsets of $B$, have the countable cardinality of the cofinality of $B$ but are not order isomorphic. But cofinal subsets of $B$ with minimal order type will be order isomorphic.
The cofinality of an ordinal $\alpha$ is the smallest ordinal $\delta$ that is the order type of a cofinal subset of $\alpha$. The cofinality of a set of ordinals or any other well-ordered set is the cofinality of the order type of that set.
Thus for a limit ordinal $\alpha$, there exists a $\delta$-indexed strictly increasing sequence with limit $\alpha$. For example, the cofinality of $\omega^2$ is $\omega$, because the sequence $\omega \cdot m$ (where $m$ ranges over the natural numbers) tends to $\omega^2$; but, more generally, any countable limit ordinal has cofinality $\omega$. An uncountable limit ordinal may have either cofinality $\omega$, as does $\omega_\omega$, or an uncountable cofinality.
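Displaying the example just given (an added illustration, not part of the original text):

```latex
% The omega-indexed, strictly increasing sequence omega*1, omega*2, ...
% is cofinal in omega^2:
\sup_{m < \omega} \ \omega \cdot m = \omega^{2} ,
% and no sequence indexed by an ordinal < omega (that is, no finite
% sequence) is unbounded in omega^2, so cf(omega^2) = omega.
```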
The cofinality of 0 is 0. The cofinality of any successor ordinal is 1. The cofinality of any nonzero limit ordinal is an infinite regular cardinal.
A regular ordinal is an ordinal that is equal to its cofinality. A singular ordinal is any ordinal that is not regular.
Every regular ordinal is the initial ordinal of a cardinal. Any limit of regular ordinals is a limit of initial ordinals and thus is also initial, but need not be regular. Assuming the axiom of choice, $\omega_{\alpha+1}$ is regular for each $\alpha$. In this case, the ordinals $0$, $1$, $\omega$, $\omega_1$, and $\omega_2$ are regular, whereas $2$, $3$, $\omega_\omega$, and $\omega_{\omega \cdot 2}$ are initial ordinals that are not regular.
The cofinality of any ordinal $\alpha$ is a regular ordinal, that is, the cofinality of the cofinality of $\alpha$ is the same as the cofinality of $\alpha$. So the cofinality operation is idempotent.
If κ {\displaystyle \kappa } is an infinite cardinal number, then cf ( κ ) {\displaystyle \operatorname {cf} (\kappa )} is the least cardinal such that there is an unbounded function from cf ( κ ) {\displaystyle \operatorname {cf} (\kappa )} to κ ; {\displaystyle \kappa ;} cf ( κ ) {\displaystyle \operatorname {cf} (\kappa )} is also the cardinality of the smallest set of strictly smaller cardinals whose sum is κ ; {\displaystyle \kappa ;} more precisely
That the set above is nonempty comes from the fact that κ = Σ_{i<κ} 1, that is, κ is the disjoint union of κ singleton sets. This implies immediately that cf(κ) ≤ κ. The cofinality of any totally ordered set is regular, so cf(κ) = cf(cf(κ)).
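A nontrivial instance of the same definition, used again below (spelled out here as an aside):

% ℵ_ω is a sum of ℵ_0 strictly smaller cardinals, so the minimum in the
% definition gives an upper bound on its cofinality
\aleph_{\omega} = \sum_{n<\omega} \aleph_{n} \quad\Longrightarrow\quad \operatorname{cf}(\aleph_{\omega}) \leq \aleph_{0}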
Using König's theorem, one can prove κ < κ^cf(κ) and κ < cf(2^κ) for any infinite cardinal κ.
The last inequality implies that the cofinality of the cardinality of the continuum must be uncountable. On the other hand, ℵ_ω = ⋃_{n<ω} ℵ_n; the ordinal number ω being the first infinite ordinal, the cofinality of ℵ_ω is card(ω) = ℵ_0. (In particular, ℵ_ω is singular.) Therefore, 2^(ℵ_0) ≠ ℵ_ω. (Compare this to the continuum hypothesis, which states 2^(ℵ_0) = ℵ_1.)
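The step from König's inequality to this conclusion is short enough to spell out (added for completeness):

% König gives cf(2^{ℵ_0}) > ℵ_0, while cf(ℵ_ω) = ℵ_0;
% a cardinal cannot have two distinct cofinalities, so the two must differ
\operatorname{cf}\left(2^{\aleph_{0}}\right) > \aleph_{0} = \operatorname{cf}(\aleph_{\omega}) \quad\Longrightarrow\quad 2^{\aleph_{0}} \neq \aleph_{\omega}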
Generalizing this argument, one can prove that for a limit ordinal δ, cf(ℵ_δ) = cf(δ). On the other hand, if the axiom of choice holds, then for a successor or zero ordinal δ, cf(ℵ_δ) = ℵ_δ.
6,695 | Citadel | A citadel is the fortified area of a town or city. It may be a castle, fortress, or fortified center. The term is a diminutive of city, meaning "little city", because it is a smaller part of the city of which it is the defensive core.
In a fortification with bastions, the citadel is the strongest part of the system, sometimes well inside the outer walls and bastions, but often forming part of the outer wall for the sake of economy. It is positioned to be the last line of defence, should the enemy breach the other components of the fortification system. The functions of the police and the army, as well as the army barracks, were developed in the citadel.
Some of the oldest known structures which have served as citadels were built by the Indus Valley civilisation, where citadels represented a centralised authority. Citadels in the Indus Valley were almost 12 meters tall. The purpose of these structures, however, remains debated. Though the structures found in the ruins of Mohenjo-daro were walled, it is far from clear that they were defensive against enemy attacks; rather, they may have been built to divert flood waters.
Several settlements in Anatolia, including the Assyrian city of Kaneš in modern-day Kültepe, featured citadels. Kaneš' citadel contained the city's palace, temples, and official buildings. The citadel of the Greek city of Mycenae was built atop a highly-defensible rectangular hill and was later surrounded by walls in order to increase its defensive capabilities.
In Ancient Greece, the Acropolis, which literally means "high city", placed on a commanding eminence, was important in the life of the people, serving as a lookout, a refuge, and a stronghold in peril, as well as containing military and food supplies, the shrine of the god and a royal palace. The best known is the Acropolis of Athens, but nearly every Greek city-state had one – the Acrocorinth was famed as a particularly strong fortress. In a much later period, when Greece was ruled by the Latin Empire, the same strong points were used by the new feudal rulers for much the same purpose.
In the first millennium BC, the Castro culture emerged in northwestern Portugal and Spain in the region extending from the Douro river up to the Minho, but soon expanding north along the coast and east following the river valleys. It was an autochthonous evolution of Atlantic Bronze Age communities. In 2008, John T. Koch attributed the origins of the Celts to this period, a view supported by Barry Cunliffe. The Ave River Valley in Portugal was the core region of this culture, with a large number of small settlements (the castros), but also settlements known as citadels or oppida by the Roman conquerors. These had several rings of walls, and the Roman conquest of the citadels of Abobriga, Lambriaca and Cinania around 138 BC was possible only by prolonged siege. Ruins of notable citadels still exist, and are known by archaeologists as Citânia de Briteiros, Citânia de Sanfins, Cividade de Terroso and Cividade de Bagunte.
Rebels who took power in a city, but with the citadel still held by the former rulers, could by no means regard their tenure of power as secure. One such incident played an important part in the history of the Maccabean Revolt against the Seleucid Empire. The Hellenistic garrison of Jerusalem and local supporters of the Seleucids held out for many years in the Acra citadel, making Maccabean rule in the rest of Jerusalem precarious. When finally gaining possession of the place, the Maccabeans pointedly destroyed and razed the Acra, though they constructed another citadel for their own use in a different part of Jerusalem.
At various periods, and particularly during the Middle Ages and the Renaissance, the citadel – having its own fortifications, independent of the city walls – was the last defence of a besieged army, often held after the town had been conquered. Locals and defending armies have often held out citadels long after the city had fallen. For example, in the 1543 Siege of Nice the Ottoman forces led by Barbarossa conquered and pillaged the town and took many captives, but the citadel held out.
In the Philippines, the Ivatan people of the northern islands of Batanes often built fortifications to protect themselves during times of war. They built their so-called idjangs on hills and elevated areas. These fortifications were likened to European castles because of their purpose. Usually, the only entrance would be via a rope ladder that was lowered only for villagers and withdrawn when invaders arrived.
In times of war, the citadel in many cases afforded retreat to the people living in the areas around the town. However, citadels were often used also to protect a garrison or political power from the inhabitants of the town where it was located, being designed to ensure loyalty from the town that they defended. For example, during the Dutch Wars of 1664–1667, King Charles II of England constructed a Royal Citadel at Plymouth, an important Channel port which needed to be defended from a possible naval attack. However, due to Plymouth's support for the Parliamentarians in the then-recent English Civil War, the Plymouth Citadel was so designed that its guns could fire on the town as well as on the sea approaches.
Barcelona had a great citadel built in 1714 to intimidate the Catalans against repeating their mid-17th- and early-18th-century rebellions against the Spanish central government. In the 19th century, when the political climate had liberalized enough to permit it, the people of Barcelona had the citadel torn down, and replaced it with the city's main central park, the Parc de la Ciutadella. A similar example is the Citadella in Budapest, Hungary.
The attack on the Bastille in the French Revolution – though afterwards remembered mainly for the release of the handful of prisoners incarcerated there – was to a considerable degree motivated by the structure's being a Royal citadel in the midst of revolutionary Paris.
Similarly, after Garibaldi's overthrow of Bourbon rule in Palermo, during the 1860 Unification of Italy, Palermo's Castellamare Citadel – a symbol of the hated and oppressive former rule – was ceremoniously demolished.
Following Belgium gaining its independence in 1830, a Dutch garrison under General David Hendrik Chassé held out in Antwerp Citadel between 1830 and 1832, while the city had already become part of independent Belgium.
The Siege of the Alcázar in the Spanish Civil War, in which the Nationalists held out against a much larger Republican force for two months until relieved, shows that in some cases a citadel can be effective even in modern warfare; a similar case is the Battle of Huế during the Vietnam War, where a North Vietnamese Army division held the citadel of Huế for 26 days against roughly equal numbers of much better-equipped US and South Vietnamese troops.
The Citadelle of Québec (construction started in 1673 and was completed in 1820) survives as the largest citadel still in official military operation in North America. It is home to the Royal 22nd Regiment of the Canadian Army and forms part of the Ramparts of Quebec City, dating back to the 1620s.
Since the mid 20th century, citadels have commonly enclosed military command and control centres, rather than cities or strategic points of defence on the boundaries of a country. These modern citadels are built to protect the command centre from heavy attacks, such as aerial or nuclear bombardment. The military citadels under London in the UK, including the massive underground complex Pindar beneath the Ministry of Defence, are examples, as is the Cheyenne Mountain nuclear bunker in the US.
On armoured warships, the heavily armoured section of the ship that protects the ammunition and machinery spaces is called the armoured citadel.
A modern naval interpretation refers to the heaviest protected part of the hull as "the vitals", and the citadel is the semi-armoured freeboard above the vitals. Generally, Anglo-American and German usage follows this, while Russian sources refer to "the vitals" as цитадель ("citadel"). Likewise, Russian literature often refers to the turret of a tank as the 'tower'.
The safe room on a ship is also called a citadel.
6,696 | Chain mail | Chain mail (also known as chainmail, mail or maille) is a type of armour consisting of small metal rings linked together in a pattern to form a mesh. It was in common military use in Europe between the 3rd century BC and the 16th century AD, and continued to be used in Asia, Africa, and the Middle East as late as the 17th century. A coat of this armour is often called a hauberk or sometimes a byrnie.
The earliest examples of surviving mail were found in the Carpathian Basin at a burial in Horný Jatov, Slovakia, dated to the 3rd century BC, and in a chieftain's burial located in Ciumești, Romania. Its invention is commonly credited to the Celts, but there are examples of Etruscan pattern mail dating from at least the 4th century BC. Mail may have been inspired by the much earlier scale armour. Mail spread to North Africa, West Africa, the Middle East, Central Asia, India, Tibet, South East Asia, and Japan.
Herodotus wrote that the ancient Persians wore scale armour, but mail is also distinctly mentioned in the Avesta, the ancient holy scripture of the Persian religion of Zoroastrianism that was founded by the prophet Zoroaster in the 5th century BC.
Mail continues to be used in the 21st century as a component of stab-resistant body armour, cut-resistant gloves for butchers and woodworkers, shark-resistant wetsuits for defense against shark bites, and a number of other applications.
The origins of the word mail are not fully known. One theory is that it originally derives from the Latin word macula, meaning 'spot' or 'opacity' (as in macula of retina). Another theory relates the word to the old French maillier, meaning 'to hammer' (related to the modern English word malleable). In modern French, maille refers to a loop or stitch. The Arabic words burnus (برنوس 'burnoose, a hooded cloak', also a chasuble worn by Coptic priests) and barnaza (برنز 'to bronze') suggest an Arabic influence for the Carolingian armour known as byrnie (see below).
The first attestations of the word mail are in Old French and Anglo-Norman: maille, maile, or male or other variants, which became mailye, maille, maile, male, or meile in Middle English.
Civilizations that used mail invented specific terms for each garment made from it. The standard terms for European mail armour derive from French: leggings are called chausses, a hood is a mail coif, and mittens, mitons. A mail collar hanging from a helmet is a camail or aventail. A shirt made from mail is a hauberk if knee-length and a haubergeon if mid-thigh length. A layer (or layers) of mail sandwiched between layers of fabric is called a jazerant.
A waist-length coat in medieval Europe was called a byrnie, although the exact construction of a byrnie is unclear, including whether it was constructed of mail or other armour types. Noting that the byrnie was the "most highly valued piece of armour" to the Carolingian soldier, Bennet, Bradbury, DeVries, Dickie, and Jestice indicate that:
There is some dispute among historians as to what exactly constituted the Carolingian byrnie. Relying... only on artistic and some literary sources because of the lack of archaeological examples, some believe that it was a heavy leather jacket with metal scales sewn onto it. It was also quite long, reaching below the hips and covering most of the arms. Other historians claim instead that the Carolingian byrnie was nothing more than a coat of mail, but longer and perhaps heavier than traditional early medieval mail. Without more certain evidence, this dispute will continue.
The use of mail as battlefield armour was common during the Iron Age and the Middle Ages, becoming less common over the course of the 16th and 17th centuries when plate armour and more advanced firearms were developed. It is believed that the Roman Republic first came into contact with mail fighting the Gauls in Cisalpine Gaul, now Northern Italy. The Roman army adopted the technology for their troops in the form of the lorica hamata which was used as a primary form of armour through the Imperial period.
After the fall of the Western Empire, much of the infrastructure needed to create plate armour diminished. Eventually the word "mail" came to be synonymous with armour. It was typically an extremely prized commodity, as it was expensive and time-consuming to produce and could mean the difference between life and death in a battle. Mail from dead combatants was frequently looted and was used by the new owner or sold for a lucrative price. As time went on and infrastructure improved, it came to be used by more soldiers. The oldest intact mail hauberk still in existence is thought to have been worn by Leopold III, Duke of Austria, who died in 1386 during the Battle of Sempach.
By the 14th century, articulated plate armour was commonly used to supplement mail. Eventually mail was supplanted by plate for the most part, as plate provided greater protection against windlass crossbows, bludgeoning weapons, and lance charges while maintaining most of the mobility of mail. However, mail was still widely used by many soldiers, along with brigandines and padded jacks. These three types of armour made up the bulk of the equipment used by soldiers, with mail being the most expensive. It was sometimes more expensive than plate armour. Mail typically persisted longer in less technologically advanced areas such as Eastern Europe, but was in use throughout Europe into the 16th century.
During the late 19th and early 20th century, mail was used as a material for bulletproof vests, most notably by the Wilkinson Sword Company. Results were unsatisfactory; Wilkinson mail worn by the Khedive of Egypt's regiment of "Iron Men" was manufactured from split rings which proved to be too brittle, and the rings would fragment when struck by bullets and aggravate the injury. The riveted mail armour worn by the opposing Sudanese Mahdists did not have the same problem but also proved to be relatively useless against the firearms of British forces at the Battle of Omdurman. During World War I, Wilkinson Sword transitioned from mail to a lamellar design which was the precursor to the flak jacket.
Chain mail was also used for face protection in World War I. Oculist Captain Cruise of the British Infantry designed a mail fringe to be attached to helmets to protect the upper face. This proved unpopular with soldiers, in spite of being proven to defend against a three-ounce (100 g) shrapnel round fired at a distance of one hundred yards (91 m). Another invention, a "splatter mask" or "splinter mask", consisted of rigid upper face protection and a mail veil to protect the lower face, and was used by early tank crews as a measure against flying steel fragments (spalling) inside the vehicle.
Mail armour was introduced to the Middle East and Asia through the Romans and was adopted by the Sassanid Persians starting in the 3rd century AD, where it was supplemental to the scale and lamellar armour already used. Mail was commonly also used as horse armour for cataphracts and heavy cavalry as well as armour for the soldiers themselves. Asian mail could be just as heavy as the European variety and sometimes had prayer symbols stamped on the rings as a sign of their craftsmanship as well as for divine protection.
Mail armour is mentioned in the Quran as being a gift revealed by Allah to David:
21:80 It was We Who taught him the making of coats of mail for your benefit, to guard you from each other's violence: will ye then be grateful? (Yusuf Ali's translation)
From the Abbasid Caliphate, mail was quickly adopted in Central Asia by Timur (Tamerlane) and the Sogdians, and by India's Delhi Sultanate. Mail armour was introduced by the Turks in the late 12th century and was commonly used by the Turks and by the Mughal and Suri armies, where it eventually became the armour of choice in India. Indian mail was constructed with alternating rows of solid links and round riveted links, and it was often integrated with plate protection (mail and plate armour).
Mail was introduced to China when its allies in Central Asia paid tribute to the Tang Emperor in 718 by giving him a coat of "link armour" assumed to be mail. China first encountered the armour in 384 when its allies in the nation of Kuchi arrived wearing "armour similar to chains". Once in China, mail was imported but was not produced widely. Due to its flexibility, comfort, and rarity, it was typically the armour of high-ranking guards and those who could afford the exotic import (to show off their social status) rather than the armour of the rank and file, who used more common brigandine, scale, and lamellar types. However, it was one of the few military products that China imported from foreigners. Mail spread to Korea slightly later where it was imported as the armour of imperial guards and generals.
In Japan, mail is called kusari which means chain. When the word kusari is used in conjunction with an armoured item it usually means that mail makes up the majority of the armour composition. An example of this would be kusari gusoku which means chain armour. Kusari jackets, hoods, gloves, vests, shin guards, shoulder guards, thigh guards, and other armoured clothing were produced, even kusari tabi socks.
Kusari was used in samurai armour at least from the time of the Mongol invasion (1270s) but particularly from the Nambokucho Period (1336–1392). The Japanese used many different weave methods including a square 4-in-1 pattern (so gusari), a hexagonal 6-in-1 pattern (hana gusari) and a European 4-in-1 (nanban gusari). The rings of Japanese mail were much smaller than their European counterparts; they would be used in patches to link together plates and to drape over vulnerable areas such as the armpits.
Riveted kusari was known and used in Japan. On page 58 of the book Japanese Arms & Armor: Introduction by H. Russell Robinson, there is a picture of Japanese riveted kusari, and this quote from the translated reference of Sakakibara Kozan's 1800 book, The Manufacture of Armour and Helmets in Sixteenth-Century Japan, shows that the Japanese not only knew of and used riveted kusari but that they manufactured it as well.
... karakuri-namban (riveted namban), with stout links each closed by a rivet. Its invention is credited to Fukushima Dembei Kunitaka, pupil, of Hojo Awa no Kami Ujifusa, but it is also said to be derived directly from foreign models. It is heavy because the links are tinned (biakuro-nagashi) and these are also sharp-edged because they are punched out of iron plate
Butted or split (twisted) links made up the majority of kusari links used by the Japanese. Links were either butted together meaning that the ends touched each other and were not riveted, or the kusari was constructed with links where the wire was turned or twisted two or more times; these split links are similar to the modern split ring commonly used on keychains. The rings were lacquered black to prevent rusting, and were always stitched onto a backing of cloth or leather. The kusari was sometimes concealed entirely between layers of cloth.
Kusari gusoku or chain armour was commonly used during the Edo period 1603 to 1868 as a stand-alone defense. According to George Cameron Stone
Entire suits of mail kusari gusoku were worn on occasions, sometimes under the ordinary clothing
In his book Arms and Armor of the Samurai: The History of Weaponry in Ancient Japan, Ian Bottomley shows a picture of a kusari armour and mentions kusari katabira (chain jackets) with detachable arms being worn by samurai police officials during the Edo period. The end of the samurai era in the 1860s, along with the 1876 ban on wearing swords in public, marked the end of any practical use for mail and other armour in Japan. Japan turned to a conscription army and uniforms replaced armour.
Mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave needs a thinner weapon to pass through), and ring thickness (generally 1.0–1.6 mm diameter (18 to 14 gauge) wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques.
When the mail was not riveted, a thrust from most sharp weapons could penetrate it. However, when mail was riveted, only a strong well-placed thrust from certain spears, or thin or dedicated mail-piercing swords like the estoc, could penetrate, and a pollaxe or halberd blow could break through the armour. Strong projectile weapons such as stronger self bows, recurve bows, and crossbows could also penetrate riveted mail. Some evidence indicates that during armoured combat, the intention was to actually get around the armour rather than through it—according to a study of skeletons found in Visby, Sweden, a majority of the skeletons showed wounds on less well protected legs. Although mail was a formidable protection, due to technological advances as time progressed, mail worn under plate armour (and stand-alone mail as well) could be penetrated by the conventional weaponry of another knight.
The flexibility of mail meant that a blow would often injure the wearer, potentially causing serious bruising or fractures, and it was a poor defence against head trauma. Mail-clad warriors typically wore separate rigid helms over their mail coifs for head protection. Likewise, blunt weapons such as maces and warhammers could harm the wearer by their impact without penetrating the armour; usually a soft armour, such as a gambeson, was worn under the hauberk. Medieval surgeons were quite capable of setting and caring for bone fractures resulting from blunt weapons. Given the era's poor understanding of hygiene, however, cuts that became infected were a much greater problem. Thus mail armour proved to be sufficient protection in most situations.
Several patterns of linking the rings together have been known since ancient times, with the most common being the 4-to-1 pattern (where each ring is linked with four others). In Europe, the 4-to-1 pattern was completely dominant. Mail was also common in East Asia, primarily Japan, with several more patterns being utilised and an entire nomenclature developing around them.
Historically, in Europe, from the pre-Roman period on, the rings composing a piece of mail would be riveted closed to reduce the chance of the rings splitting open when subjected to a thrusting attack or a hit by an arrow.
Up until the 14th century European mail was made of alternating rows of round riveted rings and solid rings. Sometime during the 14th century European mail makers started to transition from round rivets to wedge shaped rivets but continued using alternating rows of solid rings. Eventually European mail makers stopped using solid rings and almost all European mail was made from wedge riveted rings only with no solid rings. Both were commonly made of wrought iron, but some later pieces were made of heat-treated steel. Wire for the riveted rings was formed by either of two methods. One was to hammer out wrought iron into plates and cut or slit the plates. These thin pieces were then pulled through a draw plate repeatedly until the desired diameter was achieved. Waterwheel powered drawing mills are pictured in several period manuscripts. Another method was to simply forge down an iron billet into a rod and then proceed to draw it out into wire. The solid links would have been made by punching from a sheet. Guild marks were often stamped on the rings to show their origin and craftsmanship. Forge welding was also used to create solid links, but there are few possible examples known; the only well documented example from Europe is that of the camail (mail neck-defence) of the 7th century Coppergate helmet. Outside of Europe this practice was more common such as "theta" links from India. Very few examples of historic butted mail have been found and it is generally accepted that butted mail was never in wide use historically except in Japan where mail (kusari) was commonly made from butted links. Butted link mail was also used by the Moros of the Philippines in their mail and plate armours.
Mail is used as protective clothing for butchers against meat-packing equipment. Workers may wear up to 4 kg (8.8 lb) of mail under their white coats. Butchers also commonly wear a single mail glove to protect themselves from self-inflicted injury while cutting meat, as do many oyster shuckers.
Scuba divers sometimes use mail to protect them from shark bites, as do animal control officers for protection against the animals they handle. In 1980, marine biologist Jeremiah Sullivan patented his design for Neptunic full-coverage chain mail shark-resistant suits, which he had developed for close encounters with sharks. Shark expert and underwater filmmaker Valerie Taylor was among the first to develop and test shark suits in 1979 while diving with sharks.
Mail is widely used in industrial settings as shrapnel guards and splash guards in metal working operations.
Electrical applications for mail include RF leakage testing and being worn as a Faraday cage suit by tesla coil enthusiasts and high voltage electrical workers.
Conventional textile-based ballistic vests are designed to stop soft-nosed bullets but offer little defense from knife attacks. Knife-resistant armour is designed to defend against knife attacks; some of these use layers of metal plates, mail and metallic wires.
Many historical reenactment groups, especially those whose focus is Antiquity or the Middle Ages, commonly use mail both as practical armour and for costuming. Mail is especially popular amongst those groups which use steel weapons. A modern hauberk made from 1.5 mm diameter wire with 10 mm inner diameter rings weighs roughly 10 kg (22 lb) and contains 15,000–45,000 rings.
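Those weight and ring-count figures can be sanity-checked with a rough calculation. The sketch below (Python) is illustrative only: it models each ring as a torus of wire and assumes a mild-steel density of 7850 kg/m³, a value not given in the text.

import math

# Back-of-the-envelope mass estimate for one mail ring, modelled as a torus
# of wire. Ring dimensions follow the hauberk figures quoted above; the
# steel density is an assumption (7850 kg/m^3, typical mild steel).

WIRE_DIA_MM = 1.5              # wire thickness quoted above
INNER_DIA_MM = 10.0            # ring inner diameter quoted above
STEEL_DENSITY_G_MM3 = 7.85e-3  # assumed: 7850 kg/m^3 expressed in g/mm^3

def ring_mass_g(wire_dia_mm, inner_dia_mm, density=STEEL_DENSITY_G_MM3):
    """Mass of one ring: wire cross-section times centreline circumference."""
    centreline_dia = inner_dia_mm + wire_dia_mm       # mean ring diameter, mm
    wire_length = math.pi * centreline_dia            # mm of wire per ring
    cross_section = math.pi * (wire_dia_mm / 2) ** 2  # mm^2
    return cross_section * wire_length * density

mass = ring_mass_g(WIRE_DIA_MM, INNER_DIA_MM)
print(f"~{mass:.2f} g per ring")                 # ~0.50 g
print(f"~{10000 / mass:,.0f} rings in 10 kg")    # ~20,000 rings

On these assumptions, a 10 kg hauberk works out to roughly 20,000 rings, comfortably inside the quoted 15,000–45,000 range; the spread in the quoted figures reflects differing ring sizes, wire gauges, and coverage.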
One of the drawbacks of mail is the uneven weight distribution; the stress falls mainly on shoulders. Weight can be better distributed by wearing a belt over the mail, which provides another point of support.
Mail worn today for re-enactment and recreational use can be made in a variety of styles and materials. Most recreational mail today is made of butted links which are galvanised or stainless steel. This is historically inaccurate but is much less expensive to procure, and especially to maintain, than historically accurate reproductions. Mail can also be made of titanium, aluminium, bronze, or copper. Riveted mail offers significantly better protection, as well as greater historical accuracy, than mail constructed with butted links. Japanese mail (kusari) is one of the few historically correct examples of mail constructed with such butted links.
Mail remained in use as a decorative and possibly high-status symbol with military overtones long after its practical usefulness had passed. It was frequently used for the epaulettes of military uniforms. It is still used in this form by some regiments of the British Army.
Mail has applications in sculpture and jewellery, especially when made out of precious metals or colourful anodized metals. Mail artwork includes headdresses, decorative wall hangings, ornaments, chess sets, macramé, and jewelry. For these non-traditional applications, hundreds of patterns (commonly referred to as "weaves") have been invented.
Large-linked mail is occasionally used as a fetish clothing material, with the large links intended to reveal – in part – the body beneath them.
In some films, knitted string spray-painted with a metallic paint is used instead of actual mail in order to cut down on cost (an example being Monty Python and the Holy Grail, which was filmed on a very small budget). Films more dedicated to costume accuracy often use ABS plastic rings, for the lower cost and weight. Such ABS mail coats were made for The Lord of the Rings film trilogy, in addition to many metal coats. The metal coats are used rarely because of their weight, except in close-up filming where the appearance of ABS rings is distinguishable. A large scale example of the ABS mail used in the Lord of the Rings can be seen in the entrance to the Royal Armouries museum in Leeds in the form of a large curtain bearing the logo of the museum. It was acquired from the makers of the film's armour, Weta Workshop, when the museum hosted an exhibition of WETA armour from their films. For the film Mad Max Beyond Thunderdome, Tina Turner is said to have worn actual mail and she complained how heavy this was. Game of Thrones makes use of mail, notably during the "Red Wedding" scene.
Typically worn under mail armour if thin or over mail armour if thick:
Can be worn over mail armour:
Others:
"title": "In Asia"
},
{
"paragraph_id": 23,
"text": "Butted or split (twisted) links made up the majority of kusari links used by the Japanese. Links were either butted together meaning that the ends touched each other and were not riveted, or the kusari was constructed with links where the wire was turned or twisted two or more times; these split links are similar to the modern split ring commonly used on keychains. The rings were lacquered black to prevent rusting, and were always stitched onto a backing of cloth or leather. The kusari was sometimes concealed entirely between layers of cloth.",
"title": "In Asia"
},
{
"paragraph_id": 24,
"text": "Kusari gusoku or chain armour was commonly used during the Edo period 1603 to 1868 as a stand-alone defense. According to George Cameron Stone",
"title": "In Asia"
},
{
"paragraph_id": 25,
"text": "Entire suits of mail kusari gusoku were worn on occasions, sometimes under the ordinary clothing",
"title": "In Asia"
},
{
"paragraph_id": 26,
"text": "In his book Arms and Armor of the Samurai: The History of Weaponry in Ancient Japan, Ian Bottomley shows a picture of a kusari armour and mentions kusari katabira (chain jackets) with detachable arms being worn by samurai police officials during the Edo period. The end of the samurai era in the 1860s, along with the 1876 ban on wearing swords in public, marked the end of any practical use for mail and other armour in Japan. Japan turned to a conscription army and uniforms replaced armour.",
"title": "In Asia"
},
{
"paragraph_id": 27,
"text": "Mail's resistance to weapons is determined by four factors: linkage type (riveted, butted, or welded), material used (iron versus bronze or steel), weave density (a tighter weave needs a thinner weapon to surpass), and ring thickness (generally ranging from 1.0–1.6 mm diameter (18 to 14 gauge) wire in most examples). Mail, if a warrior could afford it, provided a significant advantage when combined with competent fighting techniques.",
"title": "Effectiveness"
},
{
"paragraph_id": 28,
"text": "When the mail was not riveted, a thrust from most sharp weapons could penetrate it. However, when mail was riveted, only a strong well-placed thrust from certain spears, or thin or dedicated mail-piercing swords like the estoc, could penetrate, and a pollaxe or halberd blow could break through the armour. Strong projectile weapons such as stronger self bows, recurve bows, and crossbows could also penetrate riveted mail. Some evidence indicates that during armoured combat, the intention was to actually get around the armour rather than through it—according to a study of skeletons found in Visby, Sweden, a majority of the skeletons showed wounds on less well protected legs. Although mail was a formidable protection, due to technological advances as time progressed, mail worn under plate armour (and stand-alone mail as well) could be penetrated by the conventional weaponry of another knight.",
"title": "Effectiveness"
},
{
"paragraph_id": 29,
"text": "The flexibility of mail meant that a blow would often injure the wearer, potentially causing serious bruising or fractures, and it was a poor defence against head trauma. Mail-clad warriors typically wore separate rigid helms over their mail coifs for head protection. Likewise, blunt weapons such as maces and warhammers could harm the wearer by their impact without penetrating the armour; usually a soft armour, such as gambeson, was worn under the hauberk. Medieval surgeons were very well capable of setting and caring for bone fractures resulting from blunt weapons. With the poor understanding of hygiene, however, cuts that could get infected were much more of a problem. Thus mail armour proved to be sufficient protection in most situations.",
"title": "Effectiveness"
},
{
"paragraph_id": 30,
"text": "Several patterns of linking the rings together have been known since ancient times, with the most common being the 4-to-1 pattern (where each ring is linked with four others). In Europe, the 4-to-1 pattern was completely dominant. Mail was also common in East Asia, primarily Japan, with several more patterns being utilised and an entire nomenclature developing around them.",
"title": "Manufacture"
},
{
"paragraph_id": 31,
"text": "Historically, in Europe, from the pre-Roman period on, the rings composing a piece of mail would be riveted closed to reduce the chance of the rings splitting open when subjected to a thrusting attack or a hit by an arrow.",
"title": "Manufacture"
},
{
"paragraph_id": 32,
"text": "Up until the 14th century European mail was made of alternating rows of round riveted rings and solid rings. Sometime during the 14th century European mail makers started to transition from round rivets to wedge shaped rivets but continued using alternating rows of solid rings. Eventually European mail makers stopped using solid rings and almost all European mail was made from wedge riveted rings only with no solid rings. Both were commonly made of wrought iron, but some later pieces were made of heat-treated steel. Wire for the riveted rings was formed by either of two methods. One was to hammer out wrought iron into plates and cut or slit the plates. These thin pieces were then pulled through a draw plate repeatedly until the desired diameter was achieved. Waterwheel powered drawing mills are pictured in several period manuscripts. Another method was to simply forge down an iron billet into a rod and then proceed to draw it out into wire. The solid links would have been made by punching from a sheet. Guild marks were often stamped on the rings to show their origin and craftsmanship. Forge welding was also used to create solid links, but there are few possible examples known; the only well documented example from Europe is that of the camail (mail neck-defence) of the 7th century Coppergate helmet. Outside of Europe this practice was more common such as \"theta\" links from India. Very few examples of historic butted mail have been found and it is generally accepted that butted mail was never in wide use historically except in Japan where mail (kusari) was commonly made from butted links. Butted link mail was also used by the Moros of the Philippines in their mail and plate armours.",
"title": "Manufacture"
},
{
"paragraph_id": 33,
"text": "Mail is used as protective clothing for butchers against meat-packing equipment. Workers may wear up to 4 kg (8.8 lb) of mail under their white coats. Butchers also commonly wear a single mail glove to protect themselves from self-inflicted injury while cutting meat, as do many oyster shuckers.",
"title": "Modern uses"
},
{
"paragraph_id": 34,
"text": "Scuba divers sometimes use mail to protect them from sharkbite, as do animal control officers for protection against the animals they handle. In 1980, marine biologist Jeremiah Sullivan patented his design for Neptunic full coverage chain mail shark resistant suits which he had developed for close encounters with sharks. Shark expert and underwater filmmaker Valerie Taylor was among the first to develop and test shark suits in 1979 while diving with sharks.",
"title": "Modern uses"
},
{
"paragraph_id": 35,
"text": "Mail is widely used in industrial settings as shrapnel guards and splash guards in metal working operations.",
"title": "Modern uses"
},
{
"paragraph_id": 36,
"text": "Electrical applications for mail include RF leakage testing and being worn as a Faraday cage suit by tesla coil enthusiasts and high voltage electrical workers.",
"title": "Modern uses"
},
{
"paragraph_id": 37,
"text": "Conventional textile-based ballistic vests are designed to stop soft-nosed bullets but offer little defense from knife attacks. Knife-resistant armour is designed to defend against knife attacks; some of these use layers of metal plates, mail and metallic wires.",
"title": "Modern uses"
},
{
"paragraph_id": 38,
"text": "Many historical reenactment groups, especially those whose focus is Antiquity or the Middle Ages, commonly use mail both as practical armour and for costuming. Mail is especially popular amongst those groups which use steel weapons. A modern hauberk made from 1.5 mm diameter wire with 10 mm inner diameter rings weighs roughly 10 kg (22 lb) and contains 15,000–45,000 rings.",
"title": "Modern uses"
},
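The ring count and weight quoted above can be sanity-checked with a little geometry: treat each ring as a short length of steel wire bent into a circle, so the per-ring mass follows from the wire's length and cross-section. The sketch below is a rough back-of-the-envelope estimate, not a claim about any particular historical or modern hauberk; the steel density figure and the plain-circular-ring assumption are mine, while the wire diameter, ring diameter, and 10 kg weight come from the text.

```python
import math

# Rough sanity check of the hauberk figures quoted above.
WIRE_D_MM = 1.5          # wire diameter, from the text
INNER_D_MM = 10.0        # inner ring diameter, from the text
HAUBERK_KG = 10.0        # quoted hauberk weight
STEEL_DENSITY = 7.85e-3  # g per mm^3, assumed typical for mild steel

mean_ring_d = INNER_D_MM + WIRE_D_MM          # centreline diameter of a ring
wire_len = math.pi * mean_ring_d              # wire length per ring, mm
cross_section = math.pi * (WIRE_D_MM / 2)**2  # wire cross-section, mm^2
ring_mass_g = wire_len * cross_section * STEEL_DENSITY

rings = HAUBERK_KG * 1000 / ring_mass_g
print(f"~{ring_mass_g:.2f} g per ring, ~{rings:,.0f} rings per hauberk")
# -> ~0.50 g per ring, ~20,000 rings: inside the quoted 15,000-45,000 range
```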
{
"paragraph_id": 39,
"text": "One of the drawbacks of mail is the uneven weight distribution; the stress falls mainly on shoulders. Weight can be better distributed by wearing a belt over the mail, which provides another point of support.",
"title": "Modern uses"
},
{
"paragraph_id": 40,
"text": "Mail worn today for re-enactment and recreational use can be made in a variety of styles and materials. Most recreational mail today is made of butted links which are galvanised or stainless steel. This is historically inaccurate but is much less expensive to procure and especially to maintain than historically accurate reproductions. Mail can also be made of titanium, aluminium, bronze, or copper. Riveted mail offers significantly better protection ability as well as historical accuracy than mail constructed with butted links. Japanese mail (kusari) is one of the few historically correct examples of mail being constructed with such butted links.",
"title": "Modern uses"
},
{
"paragraph_id": 41,
"text": "Mail remained in use as a decorative and possibly high-status symbol with military overtones long after its practical usefulness had passed. It was frequently used for the epaulettes of military uniforms. It is still used in this form by some regiments of the British Army.",
"title": "Modern uses"
},
{
"paragraph_id": 42,
"text": "Mail has applications in sculpture and jewellery, especially when made out of precious metals or colourful anodized metals. Mail artwork includes headdresses, decorative wall hangings, ornaments, chess sets, macramé, and jewelry. For these non-traditional applications, hundreds of patterns (commonly referred to as \"weaves\") have been invented.",
"title": "Modern uses"
},
{
"paragraph_id": 43,
"text": "Large-linked mail is occasionally used as a fetish clothing material, with the large links intended to reveal – in part – the body beneath them.",
"title": "Modern uses"
},
{
"paragraph_id": 44,
"text": "In some films, knitted string spray-painted with a metallic paint is used instead of actual mail in order to cut down on cost (an example being Monty Python and the Holy Grail, which was filmed on a very small budget). Films more dedicated to costume accuracy often use ABS plastic rings, for the lower cost and weight. Such ABS mail coats were made for The Lord of the Rings film trilogy, in addition to many metal coats. The metal coats are used rarely because of their weight, except in close-up filming where the appearance of ABS rings is distinguishable. A large scale example of the ABS mail used in the Lord of the Rings can be seen in the entrance to the Royal Armouries museum in Leeds in the form of a large curtain bearing the logo of the museum. It was acquired from the makers of the film's armour, Weta Workshop, when the museum hosted an exhibition of WETA armour from their films. For the film Mad Max Beyond Thunderdome, Tina Turner is said to have worn actual mail and she complained how heavy this was. Game of Thrones makes use of mail, notably during the \"Red Wedding\" scene.",
"title": "In film"
},
{
"paragraph_id": 45,
"text": "Typically worn under mail armour if thin or over mail armour if thick:",
"title": "See also"
},
{
"paragraph_id": 46,
"text": "Can be worn over mail armour:",
"title": "See also"
},
{
"paragraph_id": 47,
"text": "Others:",
"title": "See also"
}
] | Chain mail is the name of a type of armour consisting of small metal rings linked together in a pattern to form a mesh. It was in common military use between the 3rd century BC and the 16th century AD in Europe, while it continued to be used in Asia, Africa, and the Middle East as late as the 17th century. A coat of this armour is often called a hauberk or sometimes a byrnie. | 2001-10-04T13:03:17Z | 2023-12-26T00:00:23Z | [
"Template:Cite book",
"Template:Cbignore",
"Template:Cite magazine",
"Template:Short description",
"Template:Other uses",
"Template:Lang",
"Template:Convert",
"Template:Reflist",
"Template:Elements of Medieval armor",
"Template:Types of armour",
"Template:Authority control",
"Template:Unreferenced section",
"Template:Webarchive",
"Template:ISBN",
"Template:Cite news",
"Template:Cite AV media",
"Template:Commons category",
"Template:Citation needed",
"Template:Main",
"Template:Cite journal",
"Template:Cite web",
"Template:Cite OED"
] | https://en.wikipedia.org/wiki/Chain_mail |
6,697 | Cerberus | In Greek mythology, Cerberus (/ˈsɜːrbərəs/ or /ˈkɜːrbərəs/; Greek: Κέρβερος Kérberos [ˈkerberos]), often referred to as the hound of Hades, is a multi-headed dog that guards the gates of the Underworld to prevent the dead from leaving. He was the offspring of the monsters Echidna and Typhon, and was usually described as having three heads, a serpent for a tail, and snakes protruding from his body. Cerberus is primarily known for his capture by Heracles, the last of Heracles' twelve labours.
The etymology of Cerberus' name is uncertain. Ogden refers to attempts to establish an Indo-European etymology as "not yet successful". It has been claimed to be related to the Sanskrit word सर्वरा sarvarā, used as an epithet of one of the dogs of Yama, from a Proto-Indo-European word *k̑érberos, meaning "spotted". Lincoln (1991), among others, critiques this etymology. This etymology was also rejected by Manfred Mayrhofer, who proposed an Austro-Asiatic origin for the word, and by Beekes. Lincoln notes a similarity between Cerberus and the Norse mythological dog Garmr, relating both names to a Proto-Indo-European root *ger- "to growl" (perhaps with the suffixes -*m/*b and -*r). However, as Ogden observes, this analysis actually requires Kerberos and Garmr to be derived from two different Indo-European roots (*ker- and *gher- respectively), and so does not actually establish a relationship between the two names.
Though the name is probably not Greek, Greek etymologies for Cerberus have been offered. An etymology given by Servius (the late-fourth-century commentator on Virgil)—but rejected by Ogden—derives Cerberus from the Greek word creoboros meaning "flesh-devouring". Another suggested etymology derives Cerberus from "Ker berethrou", meaning "evil of the pit".
Descriptions of Cerberus vary, including the number of his heads. Cerberus was usually three-headed, though not always. Cerberus had several multi-headed relatives. His father was the multi-snake-footed Typhon, and Cerberus was the brother of three other multi-headed monsters, the multi-snake-headed Lernaean Hydra; Orthrus, the two-headed dog that guarded the Cattle of Geryon; and the Chimera, who had three heads: that of a lion, a goat, and a snake. And, like these close relatives, Cerberus was, with only the rare iconographic exception, multi-headed.
In the earliest description of Cerberus, Hesiod's Theogony (c. 8th – 7th century BC), Cerberus has fifty heads, while Pindar (c. 522 – c. 443 BC) gave him one hundred heads. However, later writers almost universally give Cerberus three heads. An exception is the Latin poet Horace's Cerberus which has a single dog head, and one hundred snake heads. Perhaps trying to reconcile these competing traditions, Apollodorus's Cerberus has three dog heads and the heads of "all sorts of snakes" along his back, while the Byzantine poet John Tzetzes (who probably based his account on Apollodorus) gives Cerberus fifty heads, three of which were dog heads, the rest being the "heads of other beasts of all sorts".
In art Cerberus is most commonly depicted with two dog heads (visible), never more than three, but occasionally with only one. On one of the two earliest depictions (c. 590–580 BC), a Corinthian cup from Argos (see below), now lost, Cerberus was shown as a normal single-headed dog. The first appearance of a three-headed Cerberus occurs on a mid-sixth-century BC Laconian cup (see below).
Horace's many snake-headed Cerberus followed a long tradition of Cerberus being part snake. This is perhaps already implied as early as in Hesiod's Theogony, where Cerberus' mother is the half-snake Echidna, and his father the snake-headed Typhon. In art, Cerberus is often shown as being part snake, for example the lost Corinthian cup showed snakes protruding from Cerberus' body, while the mid sixth-century BC Laconian cup gives Cerberus a snake for a tail. In the literary record, the first certain indication of Cerberus' serpentine nature comes from the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), who makes Cerberus a large poisonous snake. Plato refers to Cerberus' composite nature, and Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and presumably in connection to his serpentine nature, associates Cerberus with the creation of the poisonous aconite plant. Virgil has snakes writhe around Cerberus' neck, Ovid's Cerberus has a venomous mouth, necks "vile with snakes", and "hair inwoven with the threatening snake", while Seneca gives Cerberus a mane consisting of snakes, and a single snake tail.
Cerberus was given various other traits. According to Euripides, Cerberus not only had three heads but three bodies, and according to Virgil he had multiple backs. Cerberus ate raw flesh (according to Hesiod), had eyes which flashed fire (according to Euphorion), a three-tongued mouth (according to Horace), and acute hearing (according to Seneca).
Cerberus' only mythology concerns his capture by Heracles. As early as Homer we learn that Heracles was sent by Eurystheus, the king of Tiryns, to bring back Cerberus from Hades the king of the underworld. According to Apollodorus, this was the twelfth and final labour imposed on Heracles. In a fragment from the lost play Pirithous (attributed to either Euripides or Critias), Heracles says that, although Eurystheus commanded him to bring back Cerberus, it was not from any desire to see Cerberus, but only because Eurystheus thought that the task was impossible.
Heracles was aided in his mission by his being an initiate of the Eleusinian Mysteries. Euripides describes his initiation as "lucky" for Heracles in capturing Cerberus. And both Diodorus Siculus and Apollodorus say that Heracles was initiated into the Mysteries, in preparation for his descent into the underworld. According to Diodorus, Heracles went to Athens, where Musaeus, the son of Orpheus, was in charge of the initiation rites, while according to Apollodorus, he went to Eumolpus at Eleusis.
Heracles also had the help of Hermes, the usual guide of the underworld, as well as Athena. In the Odyssey, Homer has Hermes and Athena as his guides. And Hermes and Athena are often shown with Heracles on vase paintings depicting Cerberus' capture. By most accounts, Heracles made his descent into the underworld through an entrance at Tainaron, the most famous of the various Greek entrances to the underworld. The place is first mentioned in connection with the Cerberus story in the rationalized account of Hecataeus of Miletus (fl. 500–494 BC); Euripides, Seneca, and Apollodorus all have Heracles descend into the underworld there. However, Xenophon reports that Heracles was said to have descended at the Acherusian Chersonese near Heraclea Pontica, on the Black Sea, a place more usually associated with Heracles' exit from the underworld (see below). Heraclea, founded c. 560 BC, perhaps took its name from the association of its site with Heracles' Cerberian exploit.
While in the underworld, Heracles met the heroes Theseus and Pirithous, who were being held prisoner by Hades for attempting to carry off Hades' wife Persephone. Along with bringing back Cerberus, Heracles also managed (usually) to rescue Theseus, and in some versions Pirithous as well. According to Apollodorus, Heracles found Theseus and Pirithous near the gates of Hades, bound to the "Chair of Forgetfulness, to which they grew and were held fast by coils of serpents", and when they saw Heracles, "they stretched out their hands as if they should be raised from the dead by his might", and Heracles was able to free Theseus, but when he tried to raise up Pirithous, "the earth quaked and he let go."
The earliest evidence for the involvement of Theseus and Pirithous in the Cerberus story is found on a shield-band relief (c. 560 BC) from Olympia, where Theseus and Pirithous (named) are seated together on a chair, arms held out in supplication, while Heracles approaches, about to draw his sword. The earliest literary mention of the rescue occurs in Euripides, where Heracles saves Theseus (with no mention of Pirithous). In the lost play Pirithous, both heroes are rescued, while in the rationalized account of Philochorus, Heracles was able to rescue Theseus, but not Pirithous. In one place Diodorus says Heracles brought back both Theseus and Pirithous, by the favor of Persephone, while in another he says that Pirithous remained in Hades, or according to "some writers of myth" that neither Theseus nor Pirithous returned. Both are rescued in Hyginus.
There are various versions of how Heracles accomplished Cerberus' capture. According to Apollodorus, Heracles asked Hades for Cerberus, and Hades told Heracles he would allow him to take Cerberus only if he "mastered him without the use of the weapons which he carried", and so, using his lion-skin as a shield, Heracles squeezed Cerberus around the head until he submitted.
In some early sources Cerberus' capture seems to involve Heracles fighting Hades. Homer (Iliad 5.395–397) has Hades injured by an arrow shot by Heracles. A scholium to the Iliad passage explains that Hades had commanded that Heracles "master Cerberus without shield or Iron". Heracles did this by (as in Apollodorus) using his lion-skin instead of his shield, and making stone points for his arrows, but when Hades still opposed him, Heracles shot Hades in anger. Consistent with the no-iron requirement, on an early-sixth-century BC lost Corinthian cup, Heracles is shown attacking Hades with a stone, while the iconographic tradition, from c. 560 BC, often shows Heracles using his wooden club against Cerberus.
Euripides has Amphitryon ask Heracles: "Did you conquer him in fight, or receive him from the goddess [i.e. Persephone]?" To which Heracles answers: "In fight", and the Pirithous fragment says that Heracles "overcame the beast by force". However, according to Diodorus, Persephone welcomed Heracles "like a brother" and gave Cerberus "in chains" to Heracles. Aristophanes has Heracles seize Cerberus in a stranglehold and run off, while Seneca has Heracles again use his lion-skin as shield, and his wooden club, to subdue Cerberus, after which a quailing Hades and Persephone allow Heracles to lead a chained and submissive Cerberus away. Cerberus is often shown being chained, and Ovid tells that Heracles dragged the three-headed Cerberus with chains of adamant.
There were several locations which were said to be the place where Heracles brought up Cerberus from the underworld. The geographer Strabo (63/64 BC – c. AD 24) reports that "according to the myth writers" Cerberus was brought up at Tainaron, the same place where Euripides has Heracles enter the underworld. Seneca has Heracles enter and exit at Tainaron. Apollodorus, although he has Heracles enter at Tainaron, has him exit at Troezen. The geographer Pausanias tells us that there was a temple at Troezen with "altars to the gods said to rule under the earth", where it was said that, in addition to Cerberus being "dragged" up by Heracles, Semele was supposed to have been brought up out of the underworld by Dionysus.
Another tradition had Cerberus brought up at Heraclea Pontica (the same place which Xenophon had earlier associated with Heracles' descent) and the cause of the poisonous plant aconite which grew there in abundance. Herodorus of Heraclea and Euphorion said that when Heracles brought Cerberus up from the underworld at Heraclea, Cerberus "vomited bile" from which the aconite plant grew up. Ovid also makes Cerberus the cause of the poisonous aconite, saying that on the "shores of Scythia", upon leaving the underworld, as Cerberus was being dragged by Heracles from a cave, dazzled by the unaccustomed daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous. Seneca's Cerberus too, like Ovid's, reacts violently to his first sight of daylight. Enraged, the previously submissive Cerberus struggles furiously, and Heracles and Theseus must together drag Cerberus into the light.
Pausanias reports that according to local legend Cerberus was brought up through a chasm in the earth dedicated to Clymenus (Hades) next to the sanctuary of Chthonia at Hermione, and in Euripides' Heracles, though Euripides does not say that Cerberus was brought out there, he has Cerberus kept for a while in the "grove of Chthonia" at Hermione. Pausanias also mentions that at Mount Laphystion in Boeotia there was a statue of Heracles Charops ("with bright eyes"), where the Boeotians said Heracles brought up Cerberus. Other locations which perhaps were also associated with Cerberus being brought out of the underworld include Hierapolis, Thesprotia, and Emeia near Mycenae.
In some accounts, after bringing Cerberus up from the underworld, Heracles paraded the captured Cerberus through Greece. Euphorion has Heracles lead Cerberus through Midea in Argolis, as women and children watch in fear, and Diodorus Siculus says of Cerberus, that Heracles "carried him away to the amazement of all and exhibited him to men." Seneca has Juno complain of Heracles "highhandedly parading the black hound through Argive cities" and Heracles greeted by laurel-wreathed crowds, "singing" his praises.
Then, according to Apollodorus, Heracles showed Cerberus to Eurystheus, as commanded, after which he returned Cerberus to the underworld. However, according to Hesychius of Alexandria, Cerberus escaped, presumably returning to the underworld on his own.
The earliest mentions of Cerberus (c. 8th – 7th century BC) occur in Homer's Iliad and Odyssey, and Hesiod's Theogony. Homer does not name or describe Cerberus, but simply refers to Heracles being sent by Eurystheus to fetch the "hound of Hades", with Hermes and Athena as his guides, and, in a possible reference to Cerberus' capture, that Heracles shot Hades with an arrow. According to Hesiod, Cerberus was the offspring of the monsters Echidna and Typhon, was fifty-headed, ate raw flesh, and was the "brazen-voiced hound of Hades", who fawns on those that enter the house of Hades, but eats those who try to leave.
Stesichorus (c. 630 – 555 BC) apparently wrote a poem called Cerberus, of which virtually nothing remains. However, the lost early-sixth-century BC Corinthian cup from Argos, which showed Cerberus with a single head and snakes growing out from many places on his body, was possibly influenced by Stesichorus' poem. The mid-sixth-century BC cup from Laconia gives Cerberus three heads and a snake tail, which eventually becomes the standard representation.
Pindar (c. 522 – c. 443 BC) apparently gave Cerberus one hundred heads. Bacchylides (5th century BC) also mentions Heracles bringing Cerberus up from the underworld, with no further details. Sophocles (c. 495 – c. 405 BC), in his Women of Trachis, makes Cerberus three-headed, and in his Oedipus at Colonus, the Chorus asks that Oedipus be allowed to pass the gates of the underworld undisturbed by Cerberus, called here the "untamable Watcher of Hades". Euripides (c. 480 – 406 BC) describes Cerberus as three-headed, and three-bodied, says that Heracles entered the underworld at Tainaron, has Heracles say that Cerberus was not given to him by Persephone, but rather he fought and conquered Cerberus, "for I had been lucky enough to witness the rites of the initiated", an apparent reference to his initiation into the Eleusinian Mysteries, and says that the capture of Cerberus was the last of Heracles' labors. The lost play Pirithous (attributed to either Euripides or his late contemporary Critias) has Heracles say that he came to the underworld at the command of Eurystheus, who had ordered him to bring back Cerberus alive, not because he wanted to see Cerberus, but only because Eurystheus thought Heracles would not be able to accomplish the task, and that Heracles "overcame the beast" and "received favour from the gods".
Plato (c. 425 – 348 BC) refers to Cerberus' composite nature, citing Cerberus, along with Scylla and the Chimera, as an example from "ancient fables" of a creature composed of many animal forms "grown together in one". Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and eyes that flashed, like sparks from a blacksmith's forge, or the volcanic Mount Etna. From Euphorion, also comes the first mention of a story which told that at Heraclea Pontica, where Cerberus was brought out of the underworld, by Heracles, Cerberus "vomited bile" from which the poisonous aconite plant grew up.
According to Diodorus Siculus (1st century BC), the capture of Cerberus was the eleventh of Heracles' labors, the twelfth and last being stealing the Apples of the Hesperides. Diodorus says that Heracles thought it best to first go to Athens to take part in the Eleusinian Mysteries, "Musaeus, the son of Orpheus, being at that time in charge of the initiatory rites", after which, he entered into the underworld "welcomed like a brother by Persephone", and "receiving the dog Cerberus in chains he carried him away to the amazement of all and exhibited him to men."
In Virgil's Aeneid (1st century BC), Aeneas and the Sibyl encounter Cerberus in a cave, where he "lay at vast length", filling the cave "from end to end", blocking the entrance to the underworld. Cerberus is described as "triple-throated", with "three fierce mouths", multiple "large backs", and serpents writhing around his neck. The Sibyl throws Cerberus a loaf laced with honey and herbs to induce sleep, enabling Aeneas to enter the underworld, and so apparently for Virgil—contradicting Hesiod—Cerberus guarded the underworld against entrance. Later Virgil describes Cerberus, in his bloody cave, crouching over half-gnawed bones. In his Georgics, Virgil refers to Cerberus, his "triple jaws agape" being tamed by Orpheus' playing his lyre.
Horace (65 – 8 BC) also refers to Cerberus yielding to Orpheus' lyre, here Cerberus has a single dog head, which "like a Fury's is fortified by a hundred snakes", with a "triple-tongued mouth" oozing "fetid breath and gore".
Ovid (43 BC – AD 17/18) has Cerberus' mouth produce venom, and like Euphorion, makes Cerberus the cause of the poisonous plant aconite. According to Ovid, Heracles dragged Cerberus from the underworld, emerging from a cave "where 'tis fabled, the plant grew / on soil infected by Cerberian teeth", and dazzled by the daylight, Cerberus spewed out a "poison-foam", which made the aconite plants growing there poisonous.
Seneca, in his tragedy Hercules Furens, gives a detailed description of Cerberus and his capture. Seneca's Cerberus has three heads, a mane of snakes, and a snake tail, with his three heads being covered in gore, and licked by the many snakes which surround them, and with hearing so acute that he can hear "even ghosts". Seneca has Heracles use his lion-skin as shield, and his wooden club, to beat Cerberus into submission, after which Hades and Persephone, quailing on their thrones, let Heracles lead a chained and submissive Cerberus away. But upon leaving the underworld, at his first sight of daylight, a frightened Cerberus struggles furiously, and Heracles, with the help of Theseus (who had been held captive by Hades but released at Heracles' request), drags Cerberus into the light. Seneca, like Diodorus, has Heracles parade the captured Cerberus through Greece.
Apollodorus' Cerberus has three dog-heads, a serpent for a tail, and the heads of many snakes on his back. According to Apollodorus, Heracles' twelfth and final labor was to bring back Cerberus from Hades. Heracles first went to Eumolpus to be initiated into the Eleusinian Mysteries. Upon his entering the underworld, all the dead flee Heracles except for Meleager and the Gorgon Medusa. Heracles drew his sword against Medusa, but Hermes told Heracles that the dead are mere "empty phantoms". Heracles asked Hades (here called Pluto) for Cerberus, and Hades said that Heracles could take Cerberus provided he was able to subdue him without using weapons. Heracles found Cerberus at the gates of Acheron, and with his arms around Cerberus, though being bitten by Cerberus' serpent tail, Heracles squeezed until Cerberus submitted. Heracles carried Cerberus away, showed him to Eurystheus, then returned Cerberus to the underworld.
In an apparently unique version of the story, related by the sixth-century AD Pseudo-Nonnus, Heracles descended into Hades to abduct Persephone, and killed Cerberus on his way back up.
The capture of Cerberus was a popular theme in ancient Greek and Roman art. The earliest depictions date from the beginning of the sixth century BC. One of the two earliest depictions, a Corinthian cup (c. 590–580 BC) from Argos (now lost), shows a naked Heracles, with quiver on his back and bow in his right hand, striding left, accompanied by Hermes. Heracles threatens Hades with a stone, who flees left, while a goddess, perhaps Persephone or possibly Athena, standing in front of Hades' throne, prevents the attack. Cerberus, with a single canine head and snakes rising from his head and body, flees right. On the far right a column indicates the entrance to Hades' palace. Many of the elements of this scene—Hermes, Athena, Hades, Persephone, and a column or portico—are common occurrences in later works. The other earliest depiction, a relief pithos fragment from Crete (c. 590–570 BC), is thought to show a single lion-headed Cerberus with a snake (open-mouthed) over his back being led to the right.
A mid-sixth-century BC Laconian cup by the Hunt Painter adds several new features to the scene which also become common in later works: three heads, a snake tail, Cerberus' chain and Heracles' club. Here Cerberus has three canine heads, is covered by a shaggy coat of snakes, and has a tail which ends in a snake head. He is being held on a chain leash by Heracles, who holds his club raised over his head.
In Greek art, the vast majority of depictions of Heracles and Cerberus occur on Attic vases. Although the lost Corinthian cup shows Cerberus with a single dog head, and the relief pithos fragment (c. 590–570 BC) apparently shows a single lion-headed Cerberus, in Attic vase painting Cerberus usually has two dog heads. In other art, as in the Laconian cup, Cerberus is usually three-headed. Occasionally in Roman art Cerberus is shown with a large central lion head and two smaller dog heads on either side.
As in the Corinthian and Laconian cups (and possibly the relief pithos fragment), Cerberus is often depicted as part snake. In Attic vase painting, Cerberus is usually shown with a snake for a tail or a tail which ends in the head of a snake. Snakes are also often shown rising from various parts of his body including snout, head, neck, back, ankles, and paws.
Two Attic amphoras from Vulci, one (c. 530–515 BC) by the Bucci Painter (Munich 1493), the other (c. 525–510 BC) by the Andokides painter (Louvre F204), in addition to the usual two heads and snake tail, show Cerberus with a mane down his necks and back, another typical Cerberian feature of Attic vase painting. Andokides' amphora also has a small snake curling up from each of Cerberus' two heads.
Besides this lion-like mane and the occasional lion-head mentioned above, Cerberus was sometimes shown with other leonine features. A pitcher (c. 530–500 BC) shows Cerberus with mane and claws, while a first-century BC sardonyx cameo shows Cerberus with leonine body and paws. In addition, a limestone relief fragment from Taranto (c. 320–300 BC) shows Cerberus with three lion-like heads.
During the second quarter of the 5th century BC the capture of Cerberus disappears from Attic vase painting. After the early third century BC, the subject becomes rare everywhere until the Roman period. In Roman art the capture of Cerberus is usually shown together with other labors. Heracles and Cerberus are usually alone, with Heracles leading Cerberus.
At least as early as the 6th century BC, some ancient writers attempted to explain away various fantastical features of Greek mythology; included in these are various rationalized accounts of the Cerberus story. The earliest such account (late 6th century BC) is that of Hecataeus of Miletus. In his account Cerberus was not a dog at all, but rather simply a large venomous snake, which lived on Tainaron. The serpent was called the "hound of Hades" only because anyone bitten by it died immediately, and it was this snake that Heracles brought to Eurystheus. The geographer Pausanias (who preserves for us Hecataeus' version of the story) points out that, since Homer does not describe Cerberus, Hecataeus' account does not necessarily conflict with Homer, since Homer's "Hound of Hades" may not in fact refer to an actual dog.
Other rationalized accounts make Cerberus out to be a normal dog. According to Palaephatus (4th century BC) Cerberus was one of the two dogs who guarded the cattle of Geryon, the other being Orthrus. Geryon lived in a city named Tricranium (in Greek Tricarenia, "Three-Heads"), from which name both Cerberus and Geryon came to be called "three-headed". Heracles killed Orthrus, and drove away Geryon's cattle, with Cerberus following along behind. Molossus, a Mycenaean, offered to buy Cerberus from Eurystheus (presumably having received the dog, along with the cattle, from Heracles). But when Eurystheus refused, Molossus stole the dog and penned him up in a cave in Tainaron. Eurystheus commanded Heracles to find Cerberus and bring him back. After searching the entire Peloponnesus, Heracles found where it was said Cerberus was being held, went down into the cave, and brought up Cerberus, after which it was said: "Heracles descended through the cave into Hades and brought up Cerberus."
In the rationalized account of Philochorus, in which Heracles rescues Theseus, Pirithous is eaten by Cerberus. In this version of the story, Aidoneus (i.e., "Hades") is the mortal king of the Molossians, with a wife named Persephone, a daughter named Kore (another name for the goddess Persephone) and a large mortal dog named Cerberus, with whom all suitors of his daughter were required to fight. After stealing Helen to be Theseus' wife, Theseus and Pirithous attempt to abduct Kore for Pirithous, but Aidoneus catches the two heroes, imprisons Theseus, and feeds Pirithous to Cerberus. Later, while a guest of Aidoneus, Heracles asks Aidoneus to release Theseus, as a favor, which Aidoneus grants.
A 2nd-century AD Greek known as Heraclitus the paradoxographer (not to be confused with the 5th-century BC Greek philosopher Heraclitus) claimed that Cerberus had two pups that were never away from their father, which made Cerberus appear to be three-headed.
Servius, the late-fourth-century commentator on Virgil's Aeneid, derived Cerberus' name from the Greek word creoboros meaning "flesh-devouring" (see above), and held that Cerberus symbolized the corpse-consuming earth, with Heracles' triumph over Cerberus representing his victory over earthly desires. Later, the mythographer Fulgentius allegorizes Cerberus' three heads as representing the three origins of human strife: "nature, cause, and accident", and (drawing on the same flesh-devouring etymology as Servius) as symbolizing "the three ages—infancy, youth, old age, at which death enters the world." The Byzantine historian and bishop Eusebius wrote that Cerberus was represented with three heads, because the positions of the sun above the earth are three—rising, midday, and setting.
The later Vatican Mythographers repeat and expand upon the traditions of Servius and Fulgentius. All three Vatican Mythographers repeat Servius' derivation of Cerberus' name from creoboros. The Second Vatican Mythographer repeats (nearly word for word) what Fulgentius had to say about Cerberus, while the Third Vatican Mythographer, in another passage very similar to Fulgentius', says (more specifically than Fulgentius) that for "the philosophers" Cerberus represented hatred, his three heads symbolizing the three kinds of human hatred: natural, causal, and casual (i.e. accidental).
The Second and Third Vatican Mythographers note that the three brothers Zeus, Poseidon and Hades each have tripartite insignia, associating Hades' three-headed Cerberus with Zeus' three-forked thunderbolt and Poseidon's three-pronged trident, while the Third Vatican Mythographer adds that "some philosophers think of Cerberus as the tripartite earth: Asia, Africa, and Europe. This earth, swallowing up bodies, sends souls to Tartarus."
Virgil described Cerberus as "ravenous" (fame rabida), and a rapacious Cerberus became proverbial. Thus Cerberus came to symbolize avarice, and so, for example, in Dante's Inferno, Cerberus is placed in the Third Circle of Hell, guarding over the gluttons, where he "rends the spirits, flays and quarters them," and Dante (perhaps echoing Servius' association of Cerberus with earth) has his guide Virgil take up handfuls of earth and throw them into Cerberus' "rapacious gullets."
In the constellation Cerberus introduced by Johannes Hevelius in 1687, Cerberus is drawn as a three-headed snake, held in Hercules' hand (previously these stars had been depicted as a branch of the tree on which grew the Apples of the Hesperides).
In 1829, French naturalist Georges Cuvier gave the name Cerberus to a genus of Asian snakes, which are commonly called "dog-faced water snakes" in English.
The Serbian hard rock band Kerber, formed in 1981 by members Goran Šepa, Tomislav Nikolić and Branislav Božinović, is named after Cerberus. | [
{
"paragraph_id": 0,
"text": "In Greek mythology, Cerberus (/ˈsɜːrbərəs/ or /ˈkɜːrbərəs/; Greek: Κέρβερος Kérberos [ˈkerberos]), often referred to as the hound of Hades, is a multi-headed dog that guards the gates of the Underworld to prevent the dead from leaving. He was the offspring of the monsters Echidna and Typhon, and was usually described as having three heads, a serpent for a tail, and snakes protruding from his body. Cerberus is primarily known for his capture by Heracles, the last of Heracles' twelve labours.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The etymology of Cerberus' name is uncertain. Ogden refers to attempts to establish an Indo-European etymology as \"not yet successful\". It has been claimed to be related to the Sanskrit word सर्वरा sarvarā, used as an epithet of one of the dogs of Yama, from a Proto-Indo-European word *k̑érberos, meaning \"spotted\". Lincoln (1991), among others, critiques this etymology. This etymology was also rejected by Manfred Mayrhofer, who proposed an Austro-Asiatic origin for the word, and Beekes. Lincoln notes a similarity between Cerberus and the Norse mythological dog Garmr, relating both names to a Proto-Indo-European root *ger- \"to growl\" (perhaps with the suffixes -*m/*b and -*r). However, as Ogden observes, this analysis actually requires Kerberos and Garmr to be derived from two different Indo-European roots (*ker- and *gher- respectively), and so does not actually establish a relationship between the two names.",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "Though probably not Greek, Greek etymologies for Cerberus have been offered. An etymology given by Servius (the late-fourth-century commentator on Virgil)—but rejected by Ogden—derives Cerberus from the Greek word creoboros meaning \"flesh-devouring\". Another suggested etymology derives Cerberus from \"Ker berethrou\", meaning \"evil of the pit\".",
"title": "Etymology"
},
{
"paragraph_id": 3,
"text": "Descriptions of Cerberus vary, including the number of his heads. Cerberus was usually three-headed, though not always. Cerberus had several multi-headed relatives. His father was the multi snake-footed Typhon, and Cerberus was the brother of three other multi-headed monsters, the multi-snake-headed Lernaean Hydra; Orthrus, the two-headed dog that guarded the Cattle of Geryon; and the Chimera, who had three heads: that of a lion, a goat, and a snake. And, like these close relatives, Cerberus was, with only the rare iconographic exception, multi-headed.",
"title": "Descriptions"
},
{
"paragraph_id": 4,
"text": "In the earliest description of Cerberus, Hesiod's Theogony (c. 8th – 7th century BC), Cerberus has fifty heads, while Pindar (c. 522 – c. 443 BC) gave him one hundred heads. However, later writers almost universally give Cerberus three heads. An exception is the Latin poet Horace's Cerberus which has a single dog head, and one hundred snake heads. Perhaps trying to reconcile these competing traditions, Apollodorus's Cerberus has three dog heads and the heads of \"all sorts of snakes\" along his back, while the Byzantine poet John Tzetzes (who probably based his account on Apollodorus) gives Cerberus fifty heads, three of which were dog heads, the rest being the \"heads of other beasts of all sorts\".",
"title": "Descriptions"
},
{
"paragraph_id": 5,
"text": "In art Cerberus is most commonly depicted with two dog heads (visible), never more than three, but occasionally with only one. On one of the two earliest depictions (c. 590–580 BC), a Corinthian cup from Argos (see below), now lost, Cerberus was shown as a normal single-headed dog. The first appearance of a three-headed Cerberus occurs on a mid-sixth-century BC Laconian cup (see below).",
"title": "Descriptions"
},
{
"paragraph_id": 6,
"text": "Horace's many snake-headed Cerberus followed a long tradition of Cerberus being part snake. This is perhaps already implied as early as in Hesiod's Theogony, where Cerberus' mother is the half-snake Echidna, and his father the snake-headed Typhon. In art, Cerberus is often shown as being part snake, for example the lost Corinthian cup showed snakes protruding from Cerberus' body, while the mid sixth-century BC Laconian cup gives Cerberus a snake for a tail. In the literary record, the first certain indication of Cerberus' serpentine nature comes from the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), who makes Cerberus a large poisonous snake. Plato refers to Cerberus' composite nature, and Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and presumably in connection to his serpentine nature, associates Cerberus with the creation of the poisonous aconite plant. Virgil has snakes writhe around Cerberus' neck, Ovid's Cerberus has a venomous mouth, necks \"vile with snakes\", and \"hair inwoven with the threatening snake\", while Seneca gives Cerberus a mane consisting of snakes, and a single snake tail.",
"title": "Descriptions"
},
{
"paragraph_id": 7,
"text": "Cerberus was given various other traits. According to Euripides, Cerberus not only had three heads but three bodies, and according to Virgil he had multiple backs. Cerberus ate raw flesh (according to Hesiod), had eyes which flashed fire (according to Euphorion), a three-tongued mouth (according to Horace), and acute hearing (according to Seneca).",
"title": "Descriptions"
},
{
"paragraph_id": 8,
"text": "Cerberus' only mythology concerns his capture by Heracles. As early as Homer we learn that Heracles was sent by Eurystheus, the king of Tiryns, to bring back Cerberus from Hades the king of the underworld. According to Apollodorus, this was the twelfth and final labour imposed on Heracles. In a fragment from a lost play Pirithous, (attributed to either Euripides or Critias) Heracles says that, although Eurystheus commanded him to bring back Cerberus, it was not from any desire to see Cerberus, but only because Eurystheus thought that the task was impossible.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 9,
"text": "Heracles was aided in his mission by his being an initiate of the Eleusinian Mysteries. Euripides has his initiation being \"lucky\" for Heracles in capturing Cerberus. And both Diodorus Siculus and Apollodorus say that Heracles was initiated into the Mysteries, in preparation for his descent into the underworld. According to Diodorus, Heracles went to Athens, where Musaeus, the son of Orpheus, was in charge of the initiation rites, while according to Apollodorus, he went to Eumolpus at Eleusis.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 10,
"text": "Heracles also had the help of Hermes, the usual guide of the underworld, as well as Athena. In the Odyssey, Homer has Hermes and Athena as his guides. And Hermes and Athena are often shown with Heracles on vase paintings depicting Cerberus' capture. By most accounts, Heracles made his descent into the underworld through an entrance at Tainaron, the most famous of the various Greek entrances to the underworld. The place is first mentioned in connection with the Cerberus story in the rationalized account of Hecataeus of Miletus (fl. 500–494 BC), and Euripides, Seneca, and Apolodorus, all have Heracles descend into the underworld there. However Xenophon reports that Heracles was said to have descended at the Acherusian Chersonese near Heraclea Pontica, on the Black Sea, a place more usually associated with Heracles' exit from the underworld (see below). Heraclea, founded c. 560 BC, perhaps took its name from the association of its site with Heracles' Cerberian exploit.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 11,
"text": "While in the underworld, Heracles met the heroes Theseus and Pirithous, where the two companions were being held prisoner by Hades for attempting to carry off Hades' wife Persephone. Along with bringing back Cerberus, Heracles also managed (usually) to rescue Theseus, and in some versions Pirithous as well. According to Apollodorus, Heracles found Theseus and Pirithous near the gates of Hades, bound to the \"Chair of Forgetfulness, to which they grew and were held fast by coils of serpents\", and when they saw Heracles, \"they stretched out their hands as if they should be raised from the dead by his might\", and Heracles was able to free Theseus, but when he tried to raise up Pirithous, \"the earth quaked and he let go.\"",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 12,
"text": "The earliest evidence for the involvement of Theseus and Pirithous in the Cerberus story, is found on a shield-band relief (c. 560 BC) from Olympia, where Theseus and Pirithous (named) are seated together on a chair, arms held out in supplication, while Heracles approaches, about to draw his sword. The earliest literary mention of the rescue occurs in Euripides, where Heracles saves Theseus (with no mention of Pirithous). In the lost play Pirithous, both heroes are rescued, while in the rationalized account of Philochorus, Heracles was able to rescue Theseus, but not Pirithous. In one place Diodorus says Heracles brought back both Theseus and Pirithous, by the favor of Persephone, while in another he says that Pirithous remained in Hades, or according to \"some writers of myth\" that neither Theseus, nor Pirithous returned. Both are rescued in Hyginus.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 13,
"text": "There are various versions of how Heracles accomplished Cerberus' capture. According to Apollodorus, Heracles asked Hades for Cerberus, and Hades told Heracles he would allow him to take Cerberus only if he \"mastered him without the use of the weapons which he carried\", and so, using his lion-skin as a shield, Heracles squeezed Cerberus around the head until he submitted.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 14,
"text": "In some early sources Cerberus' capture seems to involve Heracles fighting Hades. Homer (Iliad 5.395–397) has Hades injured by an arrow shot by Heracles. A scholium to the Iliad passage, explains that Hades had commanded that Heracles \"master Cerberus without shield or Iron\". Heracles did this, by (as in Apollodorus) using his lion-skin instead of his shield, and making stone points for his arrows, but when Hades still opposed him, Heracles shot Hades in anger. Consistent with the no iron requirement, on an early-sixth-century BC lost Corinthian cup, Heracles is shown attacking Hades with a stone, while the iconographic tradition, from c. 560 BC, often shows Heracles using his wooden club against Cerberus.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 15,
"text": "Euripides has Amphitryon ask Heracles: \"Did you conquer him in fight, or receive him from the goddess [i.e. Persephone]? To which Heracles answers: \"In fight\", and the Pirithous fragment says that Heracles \"overcame the beast by force\". However, according to Diodorus, Persephone welcomed Heracles \"like a brother\" and gave Cerberus \"in chains\" to Heracles. Aristophanes has Heracles seize Cerberus in a stranglehold and run off, while Seneca has Heracles again use his lion-skin as shield, and his wooden club, to subdue Cerberus, after which a quailing Hades and Persephone allow Heracles to lead a chained and submissive Cerberus away. Cerberus is often shown being chained, and Ovid tells that Heracles dragged the three headed Cerberus with chains of adamant.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 16,
"text": "There were several locations which were said to be the place where Heracles brought up Cerberus from the underworld. The geographer Strabo (63/64 BC – c. AD 24) reports that \"according to the myth writers\" Cerberus was brought up at Tainaron, the same place where Euripides has Heracles enter the underworld. Seneca has Heracles enter and exit at Tainaron. Apollodorus, although he has Heracles enter at Tainaron, has him exit at Troezen. The geographer Pausanias tells us that there was a temple at Troezen with \"altars to the gods said to rule under the earth\", where it was said that, in addition to Cerberus being \"dragged\" up by Heracles, Semele was supposed to have been brought up out of the underworld by Dionysus.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 17,
"text": "Another tradition had Cerberus brought up at Heraclea Pontica (the same place which Xenophon had earlier associated with Heracles' descent) and the cause of the poisonous plant aconite which grew there in abundance. Herodorus of Heraclea and Euphorion said that when Heracles brought Cerberus up from the underworld at Heraclea, Cerberus \"vomited bile\" from which the aconite plant grew up. Ovid, also makes Cerberus the cause of the poisonous aconite, saying that on the \"shores of Scythia\", upon leaving the underworld, as Cerberus was being dragged by Heracles from a cave, dazzled by the unaccustomed daylight, Cerberus spewed out a \"poison-foam\", which made the aconite plants growing there poisonous. Seneca's Cerberus too, like Ovid's, reacts violently to his first sight of daylight. Enraged, the previously submissive Cerberus struggles furiously, and Heracles and Theseus must together drag Cerberus into the light.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 18,
"text": "Pausanias reports that according to local legend Cerberus was brought up through a chasm in the earth dedicated to Clymenus (Hades) next to the sanctuary of Chthonia at Hermione, and in Euripides' Heracles, though Euripides does not say that Cerberus was brought out there, he has Cerberus kept for a while in the \"grove of Chthonia\" at Hermione. Pausanias also mentions that at Mount Laphystion in Boeotia, that there was a statue of Heracles Charops (\"with bright eyes\"), where the Boeotians said Heracles brought up Cerberus. Other locations which perhaps were also associated with Cerberus being brought out of the underworld include, Hierapolis, Thesprotia, and Emeia near Mycenae.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 19,
"text": "In some accounts, after bringing Cerberus up from the underworld, Heracles paraded the captured Cerberus through Greece. Euphorion has Heracles lead Cerberus through Midea in Argolis, as women and children watch in fear, and Diodorus Siculus says of Cerberus, that Heracles \"carried him away to the amazement of all and exhibited him to men.\" Seneca has Juno complain of Heracles \"highhandedly parading the black hound through Argive cities\" and Heracles greeted by laurel-wreathed crowds, \"singing\" his praises.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 20,
"text": "Then, according to Apollodorus, Heracles showed Cerberus to Eurystheus, as commanded, after which he returned Cerberus to the underworld. However, according to Hesychius of Alexandria, Cerberus escaped, presumably returning to the underworld on his own.",
"title": "The Twelfth Labour of Heracles"
},
{
"paragraph_id": 21,
"text": "The earliest mentions of Cerberus (c. 8th – 7th century BC) occur in Homer's Iliad and Odyssey, and Hesiod's Theogony. Homer does not name or describe Cerberus, but simply refers to Heracles being sent by Eurystheus to fetch the \"hound of Hades\", with Hermes and Athena as his guides, and, in a possible reference to Cerberus' capture, that Heracles shot Hades with an arrow. According to Hesiod, Cerberus was the offspring of the monsters Echidna and Typhon, was fifty-headed, ate raw flesh, and was the \"brazen-voiced hound of Hades\", who fawns on those that enter the house of Hades, but eats those who try to leave.",
"title": "Principal sources"
},
{
"paragraph_id": 22,
"text": "Stesichorus (c. 630 – 555 BC) apparently wrote a poem called Cerberus, of which virtually nothing remains. However the early-sixth-century BC-lost Corinthian cup from Argos, which showed a single head, and snakes growing out from many places on his body, was possibly influenced by Stesichorus' poem. The mid-sixth-century BC cup from Laconia gives Cerberus three heads and a snake tail, which eventually becomes the standard representation.",
"title": "Principal sources"
},
{
"paragraph_id": 23,
"text": "Pindar (c. 522 – c. 443 BC) apparently gave Cerberus one hundred heads. Bacchylides (5th century BC) also mentions Heracles bringing Cerberus up from the underworld, with no further details. Sophocles (c. 495 – c. 405 BC), in his Women of Trachis, makes Cerberus three-headed, and in his Oedipus at Colonus, the Chorus asks that Oedipus be allowed to pass the gates of the underworld undisturbed by Cerberus, called here the \"untamable Watcher of Hades\". Euripides (c. 480 – 406 BC) describes Cerberus as three-headed, and three-bodied, says that Heracles entered the underworld at Tainaron, has Heracles say that Cerberus was not given to him by Persephone, but rather he fought and conquered Cerberus, \"for I had been lucky enough to witness the rites of the initiated\", an apparent reference to his initiation into the Eleusinian Mysteries, and says that the capture of Cerberus was the last of Heracles' labors. The lost play Pirthous (attributed to either Euripides or his late contemporary Critias) has Heracles say that he came to the underworld at the command of Eurystheus, who had ordered him to bring back Cerberus alive, not because he wanted to see Cerberus, but only because Eurystheus thought Heracles would not be able to accomplish the task, and that Heracles \"overcame the beast\" and \"received favour from the gods\".",
"title": "Principal sources"
},
{
"paragraph_id": 24,
"text": "Plato (c. 425 – 348 BC) refers to Cerberus' composite nature, citing Cerberus, along with Scylla and the Chimera, as an example from \"ancient fables\" of a creature composed of many animal forms \"grown together in one\". Euphorion of Chalcis (3rd century BC) describes Cerberus as having multiple snake tails, and eyes that flashed, like sparks from a blacksmith's forge, or the volcanic Mount Etna. From Euphorion, also comes the first mention of a story which told that at Heraclea Pontica, where Cerberus was brought out of the underworld, by Heracles, Cerberus \"vomited bile\" from which the poisonous aconite plant grew up.",
"title": "Principal sources"
},
{
"paragraph_id": 25,
"text": "According to Diodorus Siculus (1st century BC), the capture of Cerberus was the eleventh of Heracles' labors, the twelfth and last being stealing the Apples of the Hesperides. Diodorus says that Heracles thought it best to first go to Athens to take part in the Eleusinian Mysteries, \"Musaeus, the son of Orpheus, being at that time in charge of the initiatory rites\", after which, he entered into the underworld \"welcomed like a brother by Persephone\", and \"receiving the dog Cerberus in chains he carried him away to the amazement of all and exhibited him to men.\"",
"title": "Principal sources"
},
{
"paragraph_id": 26,
"text": "In Virgil's Aeneid (1st century BC), Aeneas and the Sibyl encounter Cerberus in a cave, where he \"lay at vast length\", filling the cave \"from end to end\", blocking the entrance to the underworld. Cerberus is described as \"triple-throated\", with \"three fierce mouths\", multiple \"large backs\", and serpents writhing around his neck. The Sibyl throws Cerberus a loaf laced with honey and herbs to induce sleep, enabling Aeneas to enter the underworld, and so apparently for Virgil—contradicting Hesiod—Cerberus guarded the underworld against entrance. Later Virgil describes Cerberus, in his bloody cave, crouching over half-gnawed bones. In his Georgics, Virgil refers to Cerberus, his \"triple jaws agape\" being tamed by Orpheus' playing his lyre.",
"title": "Principal sources"
},
{
"paragraph_id": 27,
"text": "Horace (65 – 8 BC) also refers to Cerberus yielding to Orpheus' lyre, here Cerberus has a single dog head, which \"like a Fury's is fortified by a hundred snakes\", with a \"triple-tongued mouth\" oozing \"fetid breath and gore\".",
"title": "Principal sources"
},
{
"paragraph_id": 28,
"text": "Ovid (43 BC – AD 17/18) has Cerberus' mouth produce venom, and like Euphorion, makes Cerberus the cause of the poisonous plant aconite. According to Ovid, Heracles dragged Cerberus from the underworld, emerging from a cave \"where 'tis fabled, the plant grew / on soil infected by Cerberian teeth\", and dazzled by the daylight, Cerberus spewed out a \"poison-foam\", which made the aconite plants growing there poisonous.",
"title": "Principal sources"
},
{
"paragraph_id": 29,
"text": "Seneca, in his tragedy Hercules Furens gives a detailed description of Cerberus and his capture. Seneca's Cerberus has three heads, a mane of snakes, and a snake tail, with his three heads being covered in gore, and licked by the many snakes which surround them, and with hearing so acute that he can hear \"even ghosts\". Seneca has Heracles use his lion-skin as shield, and his wooden club, to beat Cerberus into submission, after which Hades and Persephone, quailing on their thrones, let Heracles lead a chained and submissive Cerberus away. But upon leaving the underworld, at his first sight of daylight, a frightened Cerberus struggles furiously, and Heracles, with the help of Theseus (who had been held captive by Hades, but released, at Heracles' request) drag Cerberus into the light. Seneca, like Diodorus, has Heracles parade the captured Cerberus through Greece.",
"title": "Principal sources"
},
{
"paragraph_id": 30,
"text": "Apollodorus' Cerberus has three dog-heads, a serpent for a tail, and the heads of many snakes on his back. According to Apollodorus, Heracles' twelfth and final labor was to bring back Cerberus from Hades. Heracles first went to Eumolpus to be initiated into the Eleusinian Mysteries. Upon his entering the underworld, all the dead flee Heracles except for Meleager and the Gorgon Medusa. Heracles drew his sword against Medusa, but Hermes told Heracles that the dead are mere \"empty phantoms\". Heracles asked Hades (here called Pluto) for Cerberus, and Hades said that Heracles could take Cerberus provided he was able to subdue him without using weapons. Heracles found Cerberus at the gates of Acheron, and with his arms around Cerberus, though being bitten by Cerberus' serpent tail, Heracles squeezed until Cerberus submitted. Heracles carried Cerberus away, showed him to Eurystheus, then returned Cerberus to the underworld.",
"title": "Principal sources"
},
{
"paragraph_id": 31,
"text": "In an apparently unique version of the story, related by the sixth-century AD Pseudo-Nonnus, Heracles descended into Hades to abduct Persephone, and killed Cerberus on his way back up.",
"title": "Principal sources"
},
{
"paragraph_id": 32,
"text": "The capture of Cerberus was a popular theme in ancient Greek and Roman art. The earliest depictions date from the beginning of the sixth century BC. One of the two earliest depictions, a Corinthian cup (c. 590–580 BC) from Argos (now lost), shows a naked Heracles, with quiver on his back and bow in his right hand, striding left, accompanied by Hermes. Heracles threatens Hades with a stone, who flees left, while a goddess, perhaps Persephone or possibly Athena, standing in front of Hades' throne, prevents the attack. Cerberus, with a single canine head and snakes rising from his head and body, flees right. On the far right a column indicates the entrance to Hades' palace. Many of the elements of this scene—Hermes, Athena, Hades, Persephone, and a column or portico—are common occurrences in later works. The other earliest depiction, a relief pithos fragment from Crete (c. 590–570 BC), is thought to show a single lion-headed Cerberus with a snake (open-mouthed) over his back being led to the right.",
"title": "Iconography"
},
{
"paragraph_id": 33,
"text": "A mid-sixth-century BC Laconian cup by the Hunt Painter adds several new features to the scene which also become common in later works: three heads, a snake tail, Cerberus' chain and Heracles' club. Here Cerberus has three canine heads, is covered by a shaggy coat of snakes, and has a tail which ends in a snake head. He is being held on a chain leash by Heracles who holds his club raised over head.",
"title": "Iconography"
},
{
"paragraph_id": 34,
"text": "In Greek art, the vast majority of depictions of Heracles and Cerberus occur on Attic vases. Although the lost Corinthian cup shows Cerberus with a single dog head, and the relief pithos fragment (c. 590–570 BC) apparently shows a single lion-headed Cerberus, in Attic vase painting Cerberus usually has two dog heads. In other art, as in the Laconian cup, Cerberus is usually three-headed. Occasionally in Roman art Cerberus is shown with a large central lion head and two smaller dog heads on either side.",
"title": "Iconography"
},
{
"paragraph_id": 35,
"text": "As in the Corinthian and Laconian cups (and possibly the relief pithos fragment), Cerberus is often depicted as part snake. In Attic vase painting, Cerberus is usually shown with a snake for a tail or a tail which ends in the head of a snake. Snakes are also often shown rising from various parts of his body including snout, head, neck, back, ankles, and paws.",
"title": "Iconography"
},
{
"paragraph_id": 36,
"text": "Two Attic amphoras from Vulci, one (c. 530–515 BC) by the Bucci Painter (Munich 1493), the other (c. 525–510 BC) by the Andokides painter (Louvre F204), in addition to the usual two heads and snake tail, show Cerberus with a mane down his necks and back, another typical Cerberian feature of Attic vase painting. Andokides' amphora also has a small snake curling up from each of Cerberus' two heads.",
"title": "Iconography"
},
{
"paragraph_id": 37,
"text": "Besides this lion-like mane and the occasional lion-head mentioned above, Cerberus was sometimes shown with other leonine features. A pitcher (c. 530–500) shows Cerberus with mane and claws, while a first-century BC sardonyx cameo shows Cerberus with leonine body and paws. In addition, a limestone relief fragment from Taranto (c. 320–300 BC) shows Cerberus with three lion-like heads.",
"title": "Iconography"
},
{
"paragraph_id": 38,
"text": "During the second quarter of the 5th century BC the capture of Cerberus disappears from Attic vase painting. After the early third century BC, the subject becomes rare everywhere until the Roman period. In Roman art the capture of Cerberus is usually shown together with other labors. Heracles and Cerberus are usually alone, with Heracles leading Cerberus.",
"title": "Iconography"
},
{
"paragraph_id": 39,
"text": "At least as early as the 6th century BC, some ancient writers attempted to explain away various fantastical features of Greek mythology; included in these are various rationalized accounts of the Cerberus story. The earliest such account (late 6th century BC) is that of Hecataeus of Miletus. In his account Cerberus was not a dog at all, but rather simply a large venomous snake, which lived on Tainaron. The serpent was called the \"hound of Hades\" only because anyone bitten by it died immediately, and it was this snake that Heracles brought to Eurystheus. The geographer Pausanias (who preserves for us Hecataeus' version of the story) points out that, since Homer does not describe Cerberus, Hecataeus' account does not necessarily conflict with Homer, since Homer's \"Hound of Hades\" may not in fact refer to an actual dog.",
"title": "Cerberus rationalized"
},
{
"paragraph_id": 40,
"text": "Other rationalized accounts make Cerberus out to be a normal dog. According to Palaephatus (4th century BC) Cerberus was one of the two dogs who guarded the cattle of Geryon, the other being Orthrus. Geryon lived in a city named Tricranium (in Greek Tricarenia, \"Three-Heads\"), from which name both Cerberus and Geryon came to be called \"three-headed\". Heracles killed Orthus, and drove away Geryon's cattle, with Cerberus following along behind. Molossus, a Mycenaen, offered to buy Cerberus from Eurystheus (presumably having received the dog, along with the cattle, from Heracles). But when Eurystheus refused, Molossus stole the dog and penned him up in a cave in Tainaron. Eurystheus commanded Heracles to find Cerberus and bring him back. After searching the entire Peloponnesus, Heracles found where it was said Cerberus was being held, went down into the cave, and brought up Cerberus, after which it was said: \"Heracles descended through the cave into Hades and brought up Cerberus.\"",
"title": "Cerberus rationalized"
},
{
"paragraph_id": 41,
"text": "In the rationalized account of Philochorus, in which Heracles rescues Theseus, Perithous is eaten by Cerberus. In this version of the story, Aidoneus (i.e., \"Hades\") is the mortal king of the Molossians, with a wife named Persephone, a daughter named Kore (another name for the goddess Persephone) and a large mortal dog named Cerberus, with whom all suitors of his daughter were required to fight. After having stolen Helen, to be Theseus' wife, Theseus and Perithous, attempt to abduct Kore, for Perithous, but Aidoneus catches the two heroes, imprisons Theseus, and feeds Perithous to Cerberus. Later, while a guest of Aidoneus, Heracles asks Aidoneus to release Theseus, as a favor, which Aidoneus grants.",
"title": "Cerberus rationalized"
},
{
"paragraph_id": 42,
"text": "A 2nd-century AD Greek known as Heraclitus the paradoxographer (not to be confused with the 5th-century BC Greek philosopher Heraclitus)—claimed that Cerberus had two pups that were never away from their father, which made Cerberus appear to be three-headed.",
"title": "Cerberus rationalized"
},
{
"paragraph_id": 43,
"text": "Servius, a medieval commentator on Virgil's Aeneid, derived Cerberus' name from the Greek word creoboros meaning \"flesh-devouring\" (see above), and held that Cerberus symbolized the corpse-consuming earth, with Heracles' triumph over Cerberus representing his victory over earthly desires. Later, the mythographer Fulgentius, allegorizes Cerberus' three heads as representing the three origins of human strife: \"nature, cause, and accident\", and (drawing on the same flesh-devouring etymology as Servius) as symbolizing \"the three ages—infancy, youth, old age, at which death enters the world.\" The Byzantine historian and bishop Eusebius wrote that Cerberus was represented with three heads, because the positions of the sun above the earth are three—rising, midday, and setting.",
"title": "Cerberus allegorized"
},
{
"paragraph_id": 44,
"text": "The later Vatican Mythographers repeat and expand upon the traditions of Servius and Fulgentius. All three Vatican Mythographers repeat Servius' derivation of Cerberus' name from creoboros. The Second Vatican Mythographer repeats (nearly word for word) what Fulgentius had to say about Cerberus, while the Third Vatican Mythographer, in another very similar passage to Fugentius', says (more specifically than Fugentius), that for \"the philosophers\" Cerberus represented hatred, his three heads symbolizing the three kinds of human hatred: natural, causal, and casual (i.e. accidental).",
"title": "Cerberus allegorized"
},
{
"paragraph_id": 45,
"text": "The Second and Third Vatican Mythographers, note that the three brothers Zeus, Poseidon and Hades each have tripartite insignia, associating Hades' three-headed Cerberus, with Zeus' three-forked thunderbolt, and Poseidon's three-pronged trident, while the Third Vatican Mythographer adds that \"some philosophers think of Cerberus as the tripartite earth: Asia, Africa, and Europe. This earth, swallowing up bodies, sends souls to Tartarus.\"",
"title": "Cerberus allegorized"
},
{
"paragraph_id": 46,
"text": "Virgil described Cerberus as \"ravenous\" (fame rabida), and a rapacious Cerberus became proverbial. Thus Cerberus came to symbolize avarice, and so, for example, in Dante's Inferno, Cerberus is placed in the Third Circle of Hell, guarding over the gluttons, where he \"rends the spirits, flays and quarters them,\" and Dante (perhaps echoing Servius' association of Cerberus with earth) has his guide Virgil take up handfuls of earth and throw them into Cerberus' \"rapacious gullets.\"",
"title": "Cerberus allegorized"
},
{
"paragraph_id": 47,
"text": "In the constellation Cerberus introduced by Johannes Hevelius in 1687, Cerberus is drawn as a three-headed snake, held in Hercules' hand (previously these stars had been depicted as a branch of the tree on which grew the Apples of the Hesperides).",
"title": "Namesakes"
},
{
"paragraph_id": 48,
"text": "In 1829, French naturalist Georges Cuvier gave the name Cerberus to a genus of Asian snakes, which are commonly called \"dog-faced water snakes\" in English.",
"title": "Namesakes"
},
{
"paragraph_id": 49,
"text": "The Serbian hard rock band Kerber, formed in 1981 by members Goran Šepa, Tomislav Nikolić and Branislav Božinović, is named after Cerberus.",
"title": "Namesakes"
}
] | In Greek mythology, Cerberus, often referred to as the hound of Hades, is a multi-headed dog that guards the gates of the Underworld to prevent the dead from leaving. He was the offspring of the monsters Echidna and Typhon, and was usually described as having three heads, a serpent for a tail, and snakes protruding from his body. Cerberus is primarily known for his capture by Heracles, the last of Heracles' twelve labours. | 2001-10-04T13:41:01Z | 2023-12-30T19:31:46Z | [
"Template:Short description",
"Template:Twelve tasks of Hercules",
"Template:Metamorphoses in Greco-Roman mythology",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN",
"Template:Wiktionary-inline",
"Template:Cite web",
"Template:Refend",
"Template:Commonscat-inline",
"Template:EB1911",
"Template:Use dmy dates",
"Template:IPAc-en",
"Template:IPA-el",
"Template:Webarchive",
"Template:Authority control",
"Template:Other uses",
"Template:Lang-grc-gre",
"Template:Refbegin",
"Template:Greek religion"
] | https://en.wikipedia.org/wiki/Cerberus |
6,698 | Camel case | Camel case (sometimes stylized as camelCase or CamelCase, also known as camel caps or more formally as medial capitals) is the practice of writing phrases without spaces or punctuation and with capitalized words. In this format, the first word may start with either case, and each following word begins with an uppercase letter. Common examples include "YouTube", "iPhone" and "eBay". Camel case is often used as a naming convention in computer programming. It is also sometimes used in online usernames such as "JohnSmith", and to make multi-word domain names more legible, for example in promoting "EasyWidgetCompany.com".
The more specific terms Pascal case and upper camel case refer to a joined phrase where the first letter of each word is capitalized, including the initial letter of the first word. Similarly, lower camel case (also known as dromedary case) requires an initial lowercase letter. Some people and organizations, notably Microsoft, use the term camel case only for lower camel case, calling the upper variant Pascal case. Some programming styles prefer camel case with the first letter capitalized, others not. For clarity, this article leaves the definition of camel case ambiguous with respect to capitalization, and uses the more specific terms when necessary.
Camel case is distinct from several other styles: title case, which capitalizes all words but retains the spaces between them; Tall Man lettering, which uses capitals to emphasize the differences between similar-looking product names such as "predniSONE" and "predniSOLONE"; and snake case, which uses underscores interspersed with lowercase letters (sometimes with the first letter capitalized). A combination of snake and camel case (identifiers Written_Like_This) is recommended in the Ada 95 style guide.
The practice has various names, including:
The earliest known occurrence of the term "InterCaps" on Usenet is in an April 1990 post to the group alt.folklore.computers by Avi Rappoport. The earliest use of the name "Camel Case" occurs in 1995, in a post by Newton Love. Love has since said, "With the advent of programming languages having these sorts of constructs, the humpiness of the style made me call it HumpyCase at first, before I settled on CamelCase. I had been calling it CamelCase for years. ... The citation above was just the first time I had used the name on USENET."
The use of medial capitals as a convention in the regular spelling of everyday texts is rare, but is used in some languages as a solution to particular problems which arise when two words or segments are combined.
In Italian, pronouns can be suffixed to verbs, and because the honorific form of second-person pronouns is capitalized, this can produce a sentence like non ho trovato il tempo di risponderLe ("I have not found time to answer you" – where Le means "to you").
In German, the medial capital letter I, called Binnen-I, is sometimes used in a word like StudentInnen ("students") to indicate that both Studenten ("male students") and Studentinnen ("female students") are intended simultaneously. However, mid-word capitalization does not conform to German orthography apart from proper names like McDonald; the previous example could be correctly written using parentheses as Student(inn)en, analogous to "congress(wo)men" in English.
In Irish, camel case is used when an inflectional prefix is attached to a proper noun, for example i nGaillimh ("in Galway"), from Gaillimh ("Galway"); an tAlbanach ("the Scottish person"), from Albanach ("Scottish person"); and go hÉirinn ("to Ireland"), from Éire ("Ireland"). In recent Scottish Gaelic orthography, a hyphen has been inserted: an t-Albannach.
This convention is also used by several written Bantu languages (e.g. isiZulu, "Zulu language") and several indigenous languages of Mexico (e.g. Nahuatl, Totonacan, Mixe–Zoque, and some Oto-Manguean languages).
In Dutch, when capitalizing the digraph ij, both the letter I and the letter J are capitalized, for example in the country name IJsland ("Iceland").
In Chinese pinyin, camel case is sometimes used for place names so that readers can more easily pick out the different parts of the name. For example, places like Beijing (北京), Qinhuangdao (秦皇岛), and Daxing'anling (大兴安岭) can be written as BeiJing, QinHuangDao, and DaXingAnLing respectively, with the number of capital letters equaling the number of Chinese characters. Writing word compounds only by the initial letter of each character is also acceptable in some cases, so Beijing can be written as BJ, Qinhuangdao as QHD, and Daxing'anling as DXAL.
In English, medial capitals are usually only found in Scottish or Irish "Mac-" or "Mc-" names, where for example MacDonald, McDonald, and Macdonald are common spelling variants of the same name, and in Anglo-Norman "Fitz-" names, where for example both FitzGerald and Fitzgerald are found.
In their English style guide The King's English, first published in 1906, H. W. and F. G. Fowler suggested that medial capitals could be used in triple compound words where hyphens would cause ambiguity—the examples they give are KingMark-like (as against King Mark-like) and Anglo-SouthAmerican (as against Anglo-South American). However, they described the system as "too hopelessly contrary to use at present".
In the scholarly transliteration of languages written in other scripts, medial capitals are used in similar situations. For example, in transliterated Hebrew, haIvri means "the Hebrew person" or "the Jew" and b'Yerushalayim means "in Jerusalem". In Tibetan proper names like rLobsang, the "r" stands for a prefix glyph in the original script that functions as a tone marker rather than a normal letter. Another example is tsIurku, a Latin transcription of the Chechen term for the capping stone of the characteristic medieval defensive towers of Chechnya and Ingushetia; the letter "I" (palochka) is not actually a capital, denoting a phoneme distinct from the one transcribed as "i".
Medial capitals are traditionally used in abbreviations to reflect the capitalization that the words would have when written out in full, for example in the academic titles PhD or BSc. A more recent example is NaNoWriMo, a contraction of National Novel Writing Month and the designation for both the annual event and the nonprofit organization that runs it. In German, the names of statutes are abbreviated using embedded capitals, e.g. StGB for Strafgesetzbuch (Criminal Code), PatG for Patentgesetz (Patent Act), BVerfG for Bundesverfassungsgericht (Federal Constitutional Court), or the very common GmbH, for Gesellschaft mit beschränkter Haftung (private limited company). In this context, there can even be three or more camel case capitals, e.g. in TzBfG for Teilzeit- und Befristungsgesetz (Act on Part-Time and Limited Term Occupations). In French, camel case acronyms such as OuLiPo (1960) were favored for a time as alternatives to initialisms.
Camel case is often used to transliterate initialisms into alphabets where two letters may be required to represent a single character of the original alphabet, e.g., DShK from Cyrillic ДШК.
The first systematic and widespread use of medial capitals for technical purposes was the notation for chemical formulas invented by the Swedish chemist Jacob Berzelius in 1813. To replace the multitude of naming and symbol conventions used by chemists until that time, he proposed to indicate each chemical element by a symbol of one or two letters, the first one being capitalized. The capitalization allowed formulas like "NaCl" to be written without spaces and still be parsed without ambiguity.
Berzelius' system continues to be used, augmented with three-letter symbols such as "Uue" for unconfirmed or unknown elements and abbreviations for some common substituents (especially in the field of organic chemistry, for instance "Et" for "ethyl-"). This has been further extended to describe the amino acid sequences of proteins and other similar domains.
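As a quick illustration of the parsing property described above (this sketch is ours, not part of Berzelius' system, and it ignores parentheses and other real-world formula features), a capital letter can be treated as the start of each element symbol:

```python
import re

# Each symbol is one capital letter, optionally one lowercase letter,
# followed by an optional count: "Na", "Cl", "H2".
ELEMENT = re.compile(r"([A-Z][a-z]?)(\d*)")

def parse_formula(formula):
    """Split a simple formula into (symbol, count) pairs."""
    return [(sym, int(n) if n else 1) for sym, n in ELEMENT.findall(formula)]

print(parse_formula("NaCl"))  # [('Na', 1), ('Cl', 1)]
print(parse_formula("H2O"))   # [('H', 2), ('O', 1)]
```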
Since the early 20th century, medial capitals have occasionally been used for corporate names and product trademarks, such as
In the 1970s and 1980s, medial capitals were adopted as a standard or alternative naming convention for multi-word identifiers in several programming languages. The precise origin of the convention in computer programming has not yet been settled. A 1954 conference proceedings occasionally informally referred to IBM's Speedcoding system as "SpeedCo". Christopher Strachey's paper on GPM (1965) shows a program that includes some medial capital identifiers, including "NextCh" and "WriteSymbol".
Multiple-word descriptive identifiers with embedded spaces such as end of file or char table cannot be used in most programming languages because the spaces between the words would be parsed as delimiters between tokens. The alternative of running the words together as in endoffile or chartable is difficult to understand and possibly misleading; for example, chartable is an English word (able to be charted), whereas charTable means a table of chars.
Some early programming languages, notably Lisp (1958) and COBOL (1959), addressed this problem by allowing a hyphen ("-") to be used between words of compound identifiers, as in "END-OF-FILE": Lisp because it worked well with prefix notation (a Lisp parser would not treat a hyphen in the middle of a symbol as a subtraction operator) and COBOL because its operators were individual English words. This convention remains in use in these languages, and is also common in program names entered on a command line, as in Unix.
However, this solution was not adequate for mathematically oriented languages such as FORTRAN (1955) and ALGOL (1958), which used the hyphen as an infix subtraction operator. FORTRAN ignored blanks altogether, so programmers could use embedded spaces in variable names. However, this feature was not very useful since the early versions of the language restricted identifiers to no more than six characters.
Exacerbating the problem, common punched card character sets of the time were uppercase only and lacked other special characters. It was only in the late 1960s that the widespread adoption of the ASCII character set made both lowercase and the underscore character _ universally available. Some languages, notably C, promptly adopted underscores as word separators, and identifiers such as end_of_file are still prevalent in C programs and libraries (as well as in later languages influenced by C, such as Perl and Python). However, some languages and programmers chose to avoid underscores—among other reasons to prevent confusing them with whitespace—and adopted camel case instead.
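Because both conventions encode the same word boundaries, mechanical conversion between them is straightforward. A minimal Python sketch (illustrative only; the function names are our own):

```python
import re

def snake_to_camel(name):
    """end_of_file -> endOfFile"""
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

def camel_to_snake(name):
    """endOfFile -> end_of_file (insert '_' before each interior capital)"""
    return re.sub(r"(?<!^)(?=[A-Z])", "_", name).lower()

assert snake_to_camel("end_of_file") == "endOfFile"
assert camel_to_snake("endOfFile") == "end_of_file"
```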
Charles Simonyi, who worked at Xerox PARC in the 1970s and later oversaw the creation of Microsoft's Office suite of applications, invented and taught the use of Hungarian Notation, one version of which uses the lowercase letter(s) at the start of a (capitalized) variable name to denote its type. One account claims that the camel case style first became popular at Xerox PARC around 1978, with the Mesa programming language developed for the Xerox Alto computer. This machine lacked an underscore key (whose place was taken by a left arrow "←"), and the hyphen and space characters were not permitted in identifiers, leaving camel case as the only viable scheme for readable multiword names. The PARC Mesa Language Manual (1979) included a coding standard with specific rules for upper and lower camel case that was strictly followed by the Mesa libraries and the Alto operating system. Niklaus Wirth, the inventor of Pascal, came to appreciate camel case during a sabbatical at PARC and used it in Modula, his next programming language.
The Smalltalk language, which was developed originally on the Alto, also uses camel case instead of underscores. This language became quite popular in the early 1980s, and thus may also have been instrumental in spreading the style outside PARC.
Upper camel case (or "Pascal case") is used in the Wolfram Language of the computer algebra system Mathematica for predefined identifiers. User-defined identifiers should start with a lowercase letter. This avoids conflict between predefined and user-defined identifiers, both today and in all future versions.
C# variable names are recommended to follow the lower camel case convention.
Whatever its origins in the computing field, the convention has been used in the names of computer companies and their commercial brands since the late 1970s, a trend that continues to this day:
In the 1980s and 1990s, after the advent of the personal computer exposed hacker culture to the world, camel case became fashionable for corporate trade names in non-computer fields as well. Mainstream usage was well established by 1990:
During the dot-com bubble of the late 1990s, the lowercase prefixes "e" (for "electronic") and "i" (for "Internet", "information", "intelligent", etc.) became quite common, giving rise to names like Apple's iMac and the eBox software platform.
In 1998, Dave Yost suggested that chemists use medial capitals to aid readability of long chemical names, e.g. write AmidoPhosphoRibosylTransferase instead of amidophosphoribosyltransferase. This usage was not widely adopted.
Camel case is sometimes used for abbreviated names of certain neighborhoods, e.g. New York City neighborhoods SoHo (South of Houston Street) and TriBeCa (Triangle Below Canal Street) and San Francisco's SoMa (South of Market). Such usages erode quickly, so the neighborhoods are now typically rendered as Soho, Tribeca, and Soma.
Internal capitalization has also been used for other technical codes like HeLa (1983).
The use of medial caps for compound identifiers is recommended by the coding style guidelines of many organizations or software projects. For some languages (such as Mesa, Pascal, Modula, Java and Microsoft's .NET) this practice is recommended by the language developers or by authoritative manuals and has therefore become part of the language's "culture".
Style guidelines often distinguish between upper and lower camel case, typically specifying which variety should be used for specific kinds of entities: variables, record fields, methods, procedures, functions, subroutines, types, etc. These rules are sometimes supported by static analysis tools that check source code for adherence.
The original Hungarian notation for programming, for example, specifies that a lowercase abbreviation for the "usage type" (not data type) should prefix all variable names, with the remainder of the name in upper camel case; as such it is a form of lower camel case.
Programming identifiers often need to contain acronyms and initialisms that are already in uppercase, such as "old HTML file". By analogy with the title case rules, the natural camel case rendering would have the abbreviation all in uppercase, namely "oldHTMLFile". However, this approach is problematic when two acronyms occur together (e.g., "parse DBM XML" would become "parseDBMXML") or when the standard mandates lower camel case but the name begins with an abbreviation (e.g. "SQL server" would become "sQLServer"). For this reason, some programmers prefer to treat abbreviations as if they were words and write "oldHtmlFile", "parseDbmXml" or "sqlServer". However, this can make it harder to recognize that a given word is intended as an acronym.
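The trade-off can be seen by trying to split such identifiers back into words. In the following Python sketch (a common heuristic, not a standard algorithm), an all-caps run is assumed to end where a lowercase letter follows; adjacent acronyms then fuse into one token, which is exactly the ambiguity described above:

```python
import re

# A word is: an all-caps run not followed by a lowercase letter (an acronym),
# a capitalized word, or a lowercase run.
CAMEL_WORD = re.compile(r"[A-Z]+(?![a-z])|[A-Z][a-z0-9]*|[a-z0-9]+")

def split_camel(name):
    return CAMEL_WORD.findall(name)

print(split_camel("oldHTMLFile"))  # ['old', 'HTML', 'File']
print(split_camel("oldHtmlFile"))  # ['old', 'Html', 'File']
print(split_camel("parseDBMXML"))  # ['parse', 'DBMXML'] -- the two acronyms fuse
```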
Difficulties arise when identifiers have different meanings depending only on the case, as can occur with mathematical functions or trademarks. In this situation, changing the case of an identifier might not be an option, and an alternative name must be chosen.
Camel case is used in some wiki markup languages for terms that should be automatically linked to other wiki pages. This convention was originally used in Ward Cunningham's original wiki software, WikiWikiWeb, and can be activated in most other wikis. Some wiki engines such as TiddlyWiki, Trac and PmWiki make use of it in the default settings, but usually also provide a configuration mechanism or plugin to disable it. Wikipedia formerly used camel case linking as well, but switched to explicit link markup using square brackets and many other wiki sites have done the same. MediaWiki, for example, does not support camel case for linking. Some wikis that do not use camel case linking may still use the camel case as a naming convention, such as AboutUs.
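A sketch of the classic detection rule (simplified; real wiki engines differ in details such as digits, minimum run length, and escape syntax, and the page names here are only illustrative):

```python
import re

# A WikiWord: two or more capitalized letter runs joined without spaces.
WIKI_WORD = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")

text = "See the FrontPage and AboutUs pages."
print(WIKI_WORD.findall(text))  # ['FrontPage', 'AboutUs']
```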
The NIEM registry requires that XML data elements use upper camel case and XML attributes use lower camel case.
Most popular command-line interfaces and scripting languages cannot easily handle file names that contain embedded spaces (usually requiring the name to be put in quotes). Therefore, users of those systems often resort to camel case (or underscores, hyphens and other "safe" characters) for compound file names like MyJobResume.pdf.
Microblogging and social networking services that limit the number of characters in a message are potential outlets for medial capitals. Using camel case between words reduces the number of spaces, and thus the number of characters, in a given message, allowing more content to fit into the limited space. Hashtags, especially long ones, often use camel case to maintain readability (e.g. #CollegeStudentProblems is easier to read than #collegestudentproblems); this practice improves accessibility as screen readers recognize CamelCase in parsing composite hashtags.
In website URLs, spaces are percent-encoded as "%20", making the address longer and less human readable. By omitting spaces, camel case does not have this problem.
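For example, using Python's standard urllib (the file names are made up for illustration):

```python
from urllib.parse import quote

# A space-separated name must be percent-encoded for use in a URL...
print(quote("My Job Resume.pdf"))  # My%20Job%20Resume.pdf
# ...while a camel-case name passes through unchanged.
print(quote("MyJobResume.pdf"))    # MyJobResume.pdf
```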
Camel case has been criticized as negatively impacting readability due to the removal of spaces and uppercasing of every word.
A 2009 study of 135 subjects comparing snake case (underscored identifiers) to camel case found that camel case identifiers were recognized with higher accuracy among all subjects. Subjects recognized snake case identifiers more quickly than camel case identifiers. Training in camel case sped up camel case recognition and slowed snake case recognition, although this effect involved coefficients with high p-values. The study also conducted a subjective survey and found that non-programmers either preferred underscores or had no preference, and 38% of programmers trained in camel case stated a preference for underscores. However, these preferences had no statistical correlation to accuracy or speed when controlling for other variables.
A 2010 follow-up study used a similar study design with 15 subjects consisting of expert programmers trained primarily in snake case. It used a static rather than animated stimulus and found perfect accuracy in both styles except for one incorrect camel case response. Subjects recognized identifiers in snake case more quickly than camel case. The study used eye-tracking equipment and found that the difference in speed for its subjects was primarily due to the fact that the average duration of fixations for camel case was significantly higher than that for snake case for 3-part identifiers. The survey recorded a mixture of preferred identifier styles but again there was no correlation of preferred style to accuracy or speed. | [
{
"paragraph_id": 0,
"text": "Camel case (sometimes stylized as camelCase or CamelCase, also known as camel caps or more formally as medial capitals) is the practice of writing phrases without spaces or punctuation and with capitalized words. The format indicates the first word starting with either case, then the following words having an initial uppercase letter. Common examples include \"YouTube\", \"iPhone\" and \"eBay\". Camel case is often used as a naming convention in computer programming. It is also sometimes used in online usernames such as \"JohnSmith\", and to make multi-word domain names more legible, for example in promoting \"EasyWidgetCompany.com\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "The more specific terms Pascal case and upper camel case refer to a joined phrase where the first letter of each word is capitalized, including the initial letter of the first word. Similarly, lower camel case (also known as dromedary case) requires an initial lowercase letter. Some people and organizations, notably Microsoft, use the term camel case only for lower camel case, designating Pascal case for the upper camel case. Some programming styles prefer camel case with the first letter capitalized, others not. For clarity, this article leaves the definition of camel case ambiguous with respect to capitalization, and uses the more specific terms when necessary.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Camel case is distinct from several other styles: title case, which capitalizes all words but retains the spaces between them; Tall Man lettering, which uses capitals to emphasize the differences between similar-looking product names such as \"predniSONE\" and \"predniSOLONE\"; and snake case, which uses underscores interspersed with lowercase letters (sometimes with the first letter capitalized). A combination of snake and camel case (identifiers Written_Like_This) is recommended in the Ada 95 style guide.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The practice has various names, including:",
"title": "Variations and synonyms"
},
{
"paragraph_id": 4,
"text": "The earliest known occurrence of the term \"InterCaps\" on Usenet is in an April 1990 post to the group alt.folklore.computers by Avi Rappoport. The earliest use of the name \"Camel Case\" occurs in 1995, in a post by Newton Love. Love has since said, \"With the advent of programming languages having these sorts of constructs, the humpiness of the style made me call it HumpyCase at first, before I settled on CamelCase. I had been calling it CamelCase for years. ... The citation above was just the first time I had used the name on USENET.\"",
"title": "Variations and synonyms"
},
{
"paragraph_id": 5,
"text": "The use of medial capitals as a convention in the regular spelling of everyday texts is rare, but is used in some languages as a solution to particular problems which arise when two words or segments are combined.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 6,
"text": "In Italian, pronouns can be suffixed to verbs, and because the honorific form of second-person pronouns is capitalized, this can produce a sentence like non ho trovato il tempo di risponderLe (\"I have not found time to answer you\" – where Le means \"to you\").",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 7,
"text": "In German, the medial capital letter I, called Binnen-I, is sometimes used in a word like StudentInnen (\"students\") to indicate that both Studenten (\"male students\") and Studentinnen (\"female students\") are intended simultaneously. However, mid-word capitalization does not conform to German orthography apart from proper names like McDonald; the previous example could be correctly written using parentheses as Student(inn)en, analogous to \"congress(wo)men\" in English.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 8,
"text": "In Irish, camel case is used when an inflectional prefix is attached to a proper noun, for example i nGaillimh (\"in Galway\"), from Gaillimh (\"Galway\"); an tAlbanach (\"the Scottish person\"), from Albanach (\"Scottish person\"); and go hÉirinn (\"to Ireland\"), from Éire (\"Ireland\"). In recent Scottish Gaelic orthography, a hyphen has been inserted: an t-Albannach.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 9,
"text": "This convention is also used by several written Bantu languages (e.g. isiZulu, \"Zulu language\") and several indigenous languages of Mexico (e.g. Nahuatl, Totonacan, Mixe–Zoque, and some Oto-Manguean languages).",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 10,
"text": "In Dutch, when capitalizing the digraph ij, both the letter I and the letter J are capitalized, for example in the country name IJsland (\"Iceland\").",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 11,
"text": "In Chinese pinyin, camel case is sometimes used for place names so that readers can more easily pick out the different parts of the name. For example, places like Beijing (北京), Qinhuangdao (秦皇岛), and Daxing'anling (大兴安岭) can be written as BeiJing, QinHuangDao, and DaXingAnLing respectively, with the number of capital letters equaling the number of Chinese characters. Writing word compounds only by the initial letter of each character is also acceptable in some cases, so Beijing can be written as BJ, Qinghuangdao as QHD, and Daxing'anling as DXAL.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 12,
"text": "In English, medial capitals are usually only found in Scottish or Irish \"Mac-\" or \"Mc-\" names, where for example MacDonald, McDonald, and Macdonald are common spelling variants of the same name, and in Anglo-Norman \"Fitz-\" names, where for example both FitzGerald and Fitzgerald are found.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 13,
"text": "In their English style guide The King's English, first published in 1906, H. W. and F. G. Fowler suggested that medial capitals could be used in triple compound words where hyphens would cause ambiguity—the examples they give are KingMark-like (as against King Mark-like) and Anglo-SouthAmerican (as against Anglo-South American). However, they described the system as \"too hopelessly contrary to use at present\".",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 14,
"text": "In the scholarly transliteration of languages written in other scripts, medial capitals are used in similar situations. For example, in transliterated Hebrew, haIvri means \"the Hebrew person\" or \"the Jew\" and b'Yerushalayim means \"in Jerusalem\". In Tibetan proper names like rLobsang, the \"r\" stands for a prefix glyph in the original script that functions as tone marker rather than a normal letter. Another example is tsIurku, a Latin transcription of the Chechen term for the capping stone of the characteristic Medieval defensive towers of Chechnya and Ingushetia; the letter \"I\" (palochka) is not actually capital, denoting a phoneme distinct from the one transcribed as \"i\".",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 15,
"text": "Medial capitals are traditionally used in abbreviations to reflect the capitalization that the words would have when written out in full, for example in the academic titles PhD or BSc. A more recent example is NaNoWriMo, a contraction of National Novel Writing Month and the designation for both the annual event and the nonprofit organization that runs it. In German, the names of statutes are abbreviated using embedded capitals, e.g. StGB for Strafgesetzbuch (Criminal Code), PatG for Patentgesetz (Patent Act), BVerfG for Bundesverfassungsgericht (Federal Constitutional Court), or the very common GmbH, for Gesellschaft mit beschränkter Haftung (private limited company). In this context, there can even be three or more camel case capitals, e.g. in TzBfG for Teilzeit- und Befristungsgesetz (Act on Part-Time and Limited Term Occupations). In French, camel case acronyms such as OuLiPo (1960) were favored for a time as alternatives to initialisms.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 16,
"text": "Camel case is often used to transliterate initialisms into alphabets where two letters may be required to represent a single character of the original alphabet, e.g., DShK from Cyrillic ДШК.",
"title": "Traditional use in natural language"
},
{
"paragraph_id": 17,
"text": "The first systematic and widespread use of medial capitals for technical purposes was the notation for chemical formulas invented by the Swedish chemist Jacob Berzelius in 1813. To replace the multitude of naming and symbol conventions used by chemists until that time, he proposed to indicate each chemical element by a symbol of one or two letters, the first one being capitalized. The capitalization allowed formulas like \"NaCl\" to be written without spaces and still be parsed without ambiguity.",
"title": "History of modern technical use"
},
{
"paragraph_id": 18,
"text": "Berzelius' system continues to be used, augmented with three-letter symbols such as \"Uue\" for unconfirmed or unknown elements and abbreviations for some common substituents (especially in the field of organic chemistry, for instance \"Et\" for \"ethyl-\"). This has been further extended to describe the amino acid sequences of proteins and other similar domains.",
"title": "History of modern technical use"
},
{
"paragraph_id": 19,
"text": "Since the early 20th century, medial capitals have occasionally been used for corporate names and product trademarks, such as",
"title": "History of modern technical use"
},
{
"paragraph_id": 20,
"text": "In the 1970s and 1980s, medial capitals were adopted as a standard or alternative naming convention for multi-word identifiers in several programming languages. The precise origin of the convention in computer programming has not yet been settled. A 1954 conference proceedings occasionally informally referred to IBM's Speedcoding system as \"SpeedCo\". Christopher Strachey's paper on GPM (1965), shows a program that includes some medial capital identifiers, including \"NextCh\" and \"WriteSymbol\".",
"title": "History of modern technical use"
},
{
"paragraph_id": 21,
"text": "Multiple-word descriptive identifiers with embedded spaces such as end of file or char table cannot be used in most programming languages because the spaces between the words would be parsed as delimiters between tokens. The alternative of running the words together as in endoffile or chartable is difficult to understand and possibly misleading; for example, chartable is an English word (able to be charted), whereas charTable means a table of chars .",
"title": "History of modern technical use"
},
{
"paragraph_id": 22,
"text": "Some early programming languages, notably Lisp (1958) and COBOL (1959), addressed this problem by allowing a hyphen (\"-\") to be used between words of compound identifiers, as in \"END-OF-FILE\": Lisp because it worked well with prefix notation (a Lisp parser would not treat a hyphen in the middle of a symbol as a subtraction operator) and COBOL because its operators were individual English words. This convention remains in use in these languages, and is also common in program names entered on a command line, as in Unix.",
"title": "History of modern technical use"
},
{
"paragraph_id": 23,
"text": "However, this solution was not adequate for mathematically oriented languages such as FORTRAN (1955) and ALGOL (1958), which used the hyphen as an infix subtraction operator. FORTRAN ignored blanks altogether, so programmers could use embedded spaces in variable names. However, this feature was not very useful since the early versions of the language restricted identifiers to no more than six characters.",
"title": "History of modern technical use"
},
{
"paragraph_id": 24,
"text": "Exacerbating the problem, common punched card character sets of the time were uppercase only and lacked other special characters. It was only in the late 1960s that the widespread adoption of the ASCII character set made both lowercase and the underscore character _ universally available. Some languages, notably C, promptly adopted underscores as word separators, and identifiers such as end_of_file are still prevalent in C programs and libraries (as well as in later languages influenced by C, such as Perl and Python). However, some languages and programmers chose to avoid underscores—among other reasons to prevent confusing them with whitespace—and adopted camel case instead.",
"title": "History of modern technical use"
},
{
"paragraph_id": 25,
"text": "Charles Simonyi, who worked at Xerox PARC in the 1970s and later oversaw the creation of Microsoft's Office suite of applications, invented and taught the use of Hungarian Notation, one version of which uses the lowercase letter(s) at the start of a (capitalized) variable name to denote its type. One account claims that the camel case style first became popular at Xerox PARC around 1978, with the Mesa programming language developed for the Xerox Alto computer. This machine lacked an underscore key (whose place was taken by a left arrow \"←\"), and the hyphen and space characters were not permitted in identifiers, leaving camel case as the only viable scheme for readable multiword names. The PARC Mesa Language Manual (1979) included a coding standard with specific rules for upper and lower camel case that was strictly followed by the Mesa libraries and the Alto operating system. Niklaus Wirth, the inventor of Pascal, came to appreciate camel case during a sabbatical at PARC and used it in Modula, his next programming language.",
"title": "History of modern technical use"
},
{
"paragraph_id": 26,
"text": "The Smalltalk language, which was developed originally on the Alto, also uses camel case instead of underscores. This language became quite popular in the early 1980s, and thus may also have been instrumental in spreading the style outside PARC.",
"title": "History of modern technical use"
},
{
"paragraph_id": 27,
"text": "Upper camel case (or \"Pascal case\") is used in Wolfram Language in computer algebraic system Mathematica for predefined identifiers. User defined identifiers should start with a lower case letter. This avoids the conflict between predefined and user defined identifiers both today and in all future versions.",
"title": "History of modern technical use"
},
{
"paragraph_id": 28,
"text": "C# variable names are recommended to follow the lower camel case convention.",
"title": "History of modern technical use"
},
{
"paragraph_id": 29,
"text": "Whatever its origins in the computing field, the convention was used in the names of computer companies and their commercial brands, since the late 1970s — a trend that continues to this day:",
"title": "History of modern technical use"
},
{
"paragraph_id": 30,
"text": "In the 1980s and 1990s, after the advent of the personal computer exposed hacker culture to the world, camel case then became fashionable for corporate trade names in non-computer fields as well. Mainstream usage was well established by 1990:",
"title": "History of modern technical use"
},
{
"paragraph_id": 31,
"text": "During the dot-com bubble of the late 1990s, the lowercase prefixes \"e\" (for \"electronic\") and \"i\" (for \"Internet\", \"information\", \"intelligent\", etc.) became quite common, giving rise to names like Apple's iMac and the eBox software platform.",
"title": "History of modern technical use"
},
{
"paragraph_id": 32,
"text": "In 1998, Dave Yost suggested that chemists use medial capitals to aid readability of long chemical names, e.g. write AmidoPhosphoRibosylTransferase instead of amidophosphoribosyltransferase. This usage was not widely adopted.",
"title": "History of modern technical use"
},
{
"paragraph_id": 33,
"text": "Camel case is sometimes used for abbreviated names of certain neighborhoods, e.g. New York City neighborhoods SoHo (South of Houston Street) and TriBeCa (Triangle Below Canal Street) and San Francisco's SoMa (South of Market). Such usages erode quickly, so the neighborhoods are now typically rendered as Soho, Tribeca, and Soma.",
"title": "History of modern technical use"
},
{
"paragraph_id": 34,
"text": "Internal capitalization has also been used for other technical codes like HeLa (1983).",
"title": "History of modern technical use"
},
{
"paragraph_id": 35,
"text": "The use of medial caps for compound identifiers is recommended by the coding style guidelines of many organizations or software projects. For some languages (such as Mesa, Pascal, Modula, Java and Microsoft's .NET) this practice is recommended by the language developers or by authoritative manuals and has therefore become part of the language's \"culture\".",
"title": "Current usage in computing"
},
{
"paragraph_id": 36,
"text": "Style guidelines often distinguish between upper and lower camel case, typically specifying which variety should be used for specific kinds of entities: variables, record fields, methods, procedures, functions, subroutines, types, etc. These rules are sometimes supported by static analysis tools that check source code for adherence.",
"title": "Current usage in computing"
},
{
"paragraph_id": 37,
"text": "The original Hungarian notation for programming, for example, specifies that a lowercase abbreviation for the \"usage type\" (not data type) should prefix all variable names, with the remainder of the name in upper camel case; as such it is a form of lower camel case.",
"title": "Current usage in computing"
},
{
"paragraph_id": 38,
"text": "Programming identifiers often need to contain acronyms and initialisms that are already in uppercase, such as \"old HTML file\". By analogy with the title case rules, the natural camel case rendering would have the abbreviation all in uppercase, namely \"oldHTMLFile\". However, this approach is problematic when two acronyms occur together (e.g., \"parse DBM XML\" would become \"parseDBMXML\") or when the standard mandates lower camel case but the name begins with an abbreviation (e.g. \"SQL server\" would become \"sQLServer\"). For this reason, some programmers prefer to treat abbreviations as if they were words and write \"oldHtmlFile\", \"parseDbmXml\" or \"sqlServer\". However, this can make it harder to recognize that a given word is intended as an acronym.",
"title": "Current usage in computing"
},
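The acronym-handling choice described above (treating abbreviations as ordinary words, as in "parseDbmXml") can be made concrete with a small sketch. This is a minimal illustration, not code from any cited style guide; the function name lower_camel is hypothetical:

```python
def lower_camel(phrase: str) -> str:
    """Convert a space-separated phrase to lower camel case,
    treating acronyms as ordinary words ("parse DBM XML" -> "parseDbmXml")."""
    words = phrase.split()
    if not words:
        return ""
    # str.capitalize() lowercases the rest of each word, so acronyms
    # lose their internal capitals; only the first word stays lowercase.
    return words[0].lower() + "".join(w.capitalize() for w in words[1:])

assert lower_camel("old HTML file") == "oldHtmlFile"
assert lower_camel("parse DBM XML") == "parseDbmXml"
assert lower_camel("SQL server") == "sqlServer"
```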
{
"paragraph_id": 39,
"text": "Difficulties arise when identifiers have different meaning depending only on the case, as can occur with mathematical functions or trademarks. In this situation changing the case of an identifier might not be an option and an alternative name need be chosen.",
"title": "Current usage in computing"
},
{
"paragraph_id": 40,
"text": "Camel case is used in some wiki markup languages for terms that should be automatically linked to other wiki pages. This convention was originally used in Ward Cunningham's original wiki software, WikiWikiWeb, and can be activated in most other wikis. Some wiki engines such as TiddlyWiki, Trac and PmWiki make use of it in the default settings, but usually also provide a configuration mechanism or plugin to disable it. Wikipedia formerly used camel case linking as well, but switched to explicit link markup using square brackets and many other wiki sites have done the same. MediaWiki, for example, does not support camel case for linking. Some wikis that do not use camel case linking may still use the camel case as a naming convention, such as AboutUs.",
"title": "Current usage in computing"
},
{
"paragraph_id": 41,
"text": "The NIEM registry requires that XML data elements use upper camel case and XML attributes use lower camel case.",
"title": "Current usage in computing"
},
{
"paragraph_id": 42,
"text": "Most popular command-line interfaces and scripting languages cannot easily handle file names that contain embedded spaces (usually requiring the name to be put in quotes). Therefore, users of those systems often resort to camel case (or underscores, hyphens and other \"safe\" characters) for compound file names like MyJobResume.pdf.",
"title": "Current usage in computing"
},
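That file-naming practice can also be sketched briefly. The helper below is hypothetical, not part of any standard library; it is a minimal illustration of collapsing a spaced compound name into a "safe" camel case form while keeping the extension intact:

```python
def camel_case_filename(name: str) -> str:
    """Collapse spaces in a file name into upper camel case, keeping the
    extension intact ("my job resume.pdf" -> "MyJobResume.pdf")."""
    stem, dot, ext = name.rpartition(".")
    if not dot:  # no extension present
        stem, ext = name, ""
    # Capitalize each space-separated word and join without separators.
    return "".join(word.capitalize() for word in stem.split()) + dot + ext

print(camel_case_filename("my job resume.pdf"))  # MyJobResume.pdf
print(camel_case_filename("quarterly report"))   # QuarterlyReport
```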
{
"paragraph_id": 43,
"text": "Microblogging and social networking services that limit the number of characters in a message are potential outlets for medial capitals. Using camel case between words reduces the number of spaces, and thus the number of characters, in a given message, allowing more content to fit into the limited space. Hashtags, especially long ones, often use camel case to maintain readability (e.g. #CollegeStudentProblems is easier to read than #collegestudentproblems); this practice improves accessibility as screen readers recognize CamelCase in parsing composite hashtags.",
"title": "Current usage in computing"
},
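The accessibility point can be made concrete by showing how a camel case hashtag can be tokenized back into words, in the spirit of what a screen reader does. The regex below is only an illustrative sketch under that assumption, not any particular screen reader's algorithm:

```python
import re

def split_hashtag(tag: str) -> list[str]:
    """Tokenize a CamelCase hashtag body back into words:
    "#CollegeStudentProblems" -> ["College", "Student", "Problems"]."""
    # Match an all-caps run (acronym) not followed by a lowercase letter,
    # then a capitalized word, a lowercase word, or a run of digits.
    pattern = r"[A-Z]+(?![a-z])|[A-Z][a-z]*|[a-z]+|\d+"
    return re.findall(pattern, tag.lstrip("#"))

print(split_hashtag("#CollegeStudentProblems"))  # ['College', 'Student', 'Problems']
print(split_hashtag("#OldHTMLFile"))             # ['Old', 'HTML', 'File']
```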
{
"paragraph_id": 44,
"text": "In website URLs, spaces are percent-encoded as \"%20\", making the address longer and less human readable. By omitting spaces, camel case does not have this problem.",
"title": "Current usage in computing"
},
{
"paragraph_id": 45,
"text": "Camel case has been criticized as negatively impacting readability due to the removal of spaces and uppercasing of every word.",
"title": "Readability studies"
},
{
"paragraph_id": 46,
"text": "A 2009 study of 135 subjects comparing snake case (underscored identifiers) to camel case found that camel case identifiers were recognized with higher accuracy among all subjects. Subjects recognized snake case identifiers more quickly than camel case identifiers. Training in camel case sped up camel case recognition and slowed snake case recognition, although this effect involved coefficients with high p-values. The study also conducted a subjective survey and found that non-programmers either preferred underscores or had no preference, and 38% of programmers trained in camel case stated a preference for underscores. However, these preferences had no statistical correlation to accuracy or speed when controlling for other variables.",
"title": "Readability studies"
},
{
"paragraph_id": 47,
"text": "A 2010 follow-up study used a similar study design with 15 subjects consisting of expert programmers trained primarily in snake case. It used a static rather than animated stimulus and found perfect accuracy in both styles except for one incorrect camel case response. Subjects recognized identifiers in snake case more quickly than camel case. The study used eye-tracking equipment and found that the difference in speed for its subjects was primarily due to the fact that average duration of fixations for camel-case was significantly higher than that of snake case for 3-part identifiers. The survey recorded a mixture of preferred identifier styles but again there was no correlation of preferred style to accuracy or speed.",
"title": "Readability studies"
}
] | Camel case is the practice of writing phrases without spaces or punctuation and with capitalized words. The format indicates the first word starting with either case, then the following words having an initial uppercase letter. Common examples include "YouTube", "iPhone" and "eBay". Camel case is often used as a naming convention in computer programming. It is also sometimes used in online usernames such as "JohnSmith", and to make multi-word domain names more legible, for example in promoting "EasyWidgetCompany.com". The more specific terms Pascal case and upper camel case refer to a joined phrase where the first letter of each word is capitalized, including the initial letter of the first word. Similarly, lower camel case requires an initial lowercase letter. Some people and organizations, notably Microsoft, use the term camel case only for lower camel case, designating Pascal case for the upper camel case. Some programming styles prefer camel case with the first letter capitalized, others not. For clarity, this article leaves the definition of camel case ambiguous with respect to capitalization, and uses the more specific terms when necessary. Camel case is distinct from several other styles: title case, which capitalizes all words but retains the spaces between them; Tall Man lettering, which uses capitals to emphasize the differences between similar-looking product names such as "predniSONE" and "predniSOLONE"; and snake case, which uses underscores interspersed with lowercase letters. A combination of snake and camel case is recommended in the Ada 95 style guide. | 2001-10-11T19:58:54Z | 2023-12-06T12:47:32Z | [
"Template:Use dmy dates",
"Template:Lang",
"Template:Main article",
"Template:Reflist",
"Template:Short description",
"Template:Col-begin",
"Template:Original research",
"Template:Citation needed",
"Template:Columns-list",
"Template:Use American English",
"Template:Cite book",
"Template:Dead link",
"Template:Typography terms",
"Template:Cite journal",
"Template:Cite news",
"Template:Commons category",
"Template:Col-break",
"Template:Col-end",
"Template:Cite web",
"Template:Webarchive",
"Template:Cite newsgroup",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Camel_case |
6,700 | Cereal | A cereal is any grass cultivated for its edible grain (botanically, a type of fruit called a caryopsis), which is composed of an endosperm, a germ, and a bran. Cereal grain crops are grown in greater quantities and provide more food energy worldwide than any other type of crop and are therefore staple crops. They include rice, wheat, rye, oats, barley, millet, and maize. Edible grains from other plant families, such as buckwheat, quinoa, and chia, are referred to as pseudocereals.
In their unprocessed whole grain form, cereals are a rich source of vitamins, minerals, carbohydrates, fats, oils, and protein. When processed by the removal of the bran and germ, the remaining endosperm is mostly carbohydrate. In some developing countries, cereals constitute a majority of daily sustenance. In developed countries, cereal consumption is moderate and varied but still substantial, primarily in the form of refined and processed grains. Because of the dietary importance of cereals, the cereal trade is often at the heart of the food trade, with many cereals sold as commodities.
Agriculture allowed for the support of an increased population, leading to larger societies and eventually the development of cities. It also created the need for greater organization of political power (and the creation of social stratification), as decisions had to be made regarding labor and harvest allocation and access rights to water and land. Agriculture bred immobility, as populations settled down for long periods of time, which led to the accumulation of material goods.
Early Neolithic villages show evidence of the development of processing grain. The Levant is the ancient home of the ancestors of wheat, barley and peas, in which many of these villages were based. There is evidence of the cultivation of cereals in Syria approximately 9,000 years ago. Wheat, barley, rye, oats and flaxseeds were all domesticated in the Fertile Crescent during the early Neolithic. During the same period, farmers in China began to farm rice and millet, using human-made floods and fires as part of their cultivation regimen. Fiber crops were domesticated as early as food crops, with China domesticating hemp, cotton being developed independently in Africa and South America, and Western Asia domesticating flax. The use of soil conditioners, including manure, fish, compost and ashes, appears to have begun early, and developed independently in several areas of the world, including Mesopotamia, the Nile Valley and Eastern Asia.
The first cereal grains were domesticated about 8,000 years ago by ancient farming communities in the Fertile Crescent region. Emmer wheat, einkorn wheat, and barley were three of the so-called Neolithic founder crops in the development of agriculture. Around the same time, millets and kinds of rice were starting to become domesticated in East Asia. Sorghum and millets were also being domesticated in sub-Saharan West Africa, where both were used primarily as feed for livestock.
Cereals were the foundation of human civilization. In the Mesopotamian creation myth, an era of civilization is inaugurated by the grain goddess Ashnan. Cereal frontiers coincided with civilizational frontiers. The term Fertile Crescent implies the spatial dependence of civilization on cereals. The Great Wall of China and the Roman limes demarcated the same northern limit of cereal cultivation. The Silk Road stretched along the cereal belt of Eurasia. Numerous Chinese imperial edicts stated: "Agriculture is the foundation of this empire," while the foundation of agriculture was the Five Grains. The word cereal is derived from Ceres, the Roman goddess of harvest and agriculture.
Cereals determined how large and for how long an army could be mobilized. For this reason, Shang Yang called agriculture and war "the One". Guan Zhong, Chanakya (the author of Arthashastra) and Hannibal expressed similar concepts. At the dawn of history, the Sumerians believed that if the agriculture of a state declines, Inanna, the goddess of war, leaves this state. Several gods of antiquity combined the functions of what Shang Yang called "the One" – agriculture and war: the Hittite Sun goddess of Arinna, the Canaanite Lahmu and the Roman Janus. These were highly important gods in their time, and their legacy persists today: we still begin the year with the month of Janus (January). The Jews believe that the Messiah's family will originate in the town of Lahmu (Bethlehem); in Hebrew, beit lehem literally means "house of bread". Christians believe that Jesus Christ, who is said to have been born in Bethlehem, is the Messiah. In Hebrew, bread (lehem) and warfare (milhama) share the same root. In fact, the most persistent and flourishing empires throughout history in both hemispheres were centered in regions fertile for cereals.
The bond between cereal and imperial powers was not broken even in the Industrial Age. All modern great powers have traditionally remained first and foremost great cereal powers. The "finest hour" of the Axis powers "ended precisely the moment they threw themselves against the two largest cereal lebensraums" (the United States and the USSR). The outcome of the Cold War followed the Soviet Union's grave and long-lasting cereal crisis, exacerbated by the cereal embargo imposed on the USSR in 1980. And the most productive "cereal lebensraum", called "the grain basket of the world," has dominated the world ever since.
Having analyzed the mechanism at work behind this pattern, Ostrovsky argued that cereal power determines the percentage of manpower available to non-agricultural sectors, including the heavy industry vital for military power. He emphasized that, chronologically, the Industrial Revolution followed the modern Agricultural Revolution and that, spatially, the world's industrial regions are bound to cereal regions. A map of global illumination taken from space is said to indicate the industrial regions by its brightest parts; these regions coincide with cereal regions. Ostrovsky formulated a universal indicator of national power valid for all periods: the total cereal tonnage produced by one percent of a nation's manpower. For the present, this indicator demonstrates a unipolar international hierarchy.
As of 2021, cereals are the most traded commodity by quantity: the Americas and Europe are the largest exporters and Asia is the largest importer.
During the second half of the 20th century there was a significant increase in the production of high-yield cereal crops worldwide, especially wheat and rice, due to an initiative known as the Green Revolution. The strategies developed by the Green Revolution focused on fending off starvation and increasing yield-per-plant, and were very successful in raising overall yields of cereal grains, but did not give sufficient relevance to nutritional quality. These modern high-yield cereal crops tend to have low quality proteins, with essential amino acid deficiencies, are high in carbohydrates, and lack balanced essential fatty acids, vitamins, minerals and other quality factors. So-called ancient grains and heirloom varieties have seen an increase in popularity with the "organic" movements of the early 21st century, but there is a tradeoff in yield-per-plant, putting pressure on resource-poor areas as food crops are replaced with cash crops.
Cereals belong to the family Poaceae, commonly known as grasses. Grasses have stems that are hollow except at the nodes and narrow alternate leaves borne in two ranks. The lower part of each leaf encloses the stem, forming a leaf-sheath. The leaf grows from the base of the blade, an adaptation allowing it to cope with frequent grazing. The flowers are usually hermaphroditic—maize being an important exception—and mainly anemophilous or wind-pollinated, although insects occasionally play a role.
Some of the best-known cereals are maize, rice, wheat, barley, sorghum, millet, oat, rye and triticale. Some pseudocereals are colloquially called cereals, even though botanically they do not belong to the Poaceae family; these include buckwheat, quinoa, and amaranth.
Some cereals are deficient in the essential amino acid lysine. That is why many vegetarian cultures, in order to get a balanced diet, combine their diet of cereal grains with legumes. Many legumes, however, are deficient in the essential amino acid methionine, which grains contain. Thus, a combination of legumes with grains forms a well-balanced diet for vegetarians. Common examples of such combinations are dal (lentils) with rice by South Indians and Bengalis, dal with wheat in Pakistan and North India, beans with maize tortillas, tofu with rice, and peanut butter with wholegrain wheat bread (as sandwiches) in several other cultures, including the Americas. The amount of crude protein measured in grains is expressed as grain crude protein concentration.
Cereals contain exogenous opioid food peptides called exorphins such as gluten exorphin. They mimic the actions of endorphins because they bind to the same opioid receptors in the brain.
While each individual species has its own peculiarities, the cultivation of all cereal crops is similar. Most are annual plants; consequently one planting yields one harvest. Cereals adapted to temperate climates are called cool-season cereals, and those grown in tropical climates are called warm-season cereals. Wheat, rye, triticale, oats, barley, and spelt are the "cool-season" cereals. These are hardy plants that grow well in moderate weather and cease to grow in hot weather (approximately 30 °C or 85 °F, but this varies by species and variety). The "warm-season" cereals are tender and prefer hot weather. Barley and rye are the hardiest cereals, able to overwinter in the subarctic and Siberia. Many cool-season cereals are grown in the tropics. However, some are only grown in cooler highlands, where it may be possible to grow multiple crops per year.
For the past few decades, there has also been increasing interest in perennial grain plants. This interest developed due to advantages in erosion control, reduced need for fertilizer, and potentially lowered costs to the farmer. Though research is still in early stages, The Land Institute in Salina, Kansas, has been able to create a few cultivars that produce a fairly good crop yield.
The warm-season cereals are grown in tropical lowlands year-round and in temperate climates during the frost-free season. Rice is commonly grown in flooded fields, though some strains are grown on dry land. Other warm climate cereals, such as sorghum, are adapted to arid conditions.
Cool-season cereals are well-adapted to temperate climates. Most varieties of a particular species are either winter or spring types. Winter varieties are sown in the autumn, germinate and grow vegetatively, then become dormant during winter. They resume growing in the springtime and mature in late spring or early summer. This cultivation system makes optimal use of water and frees the land for another crop early in the growing season.
Winter varieties do not flower until springtime because they need vernalization: exposure to low temperatures for a genetically determined length of time. Where winters are too warm for vernalization or exceed the hardiness of the crop (which varies by species and variety), farmers grow spring varieties. Spring cereals are planted in early springtime and mature later that same summer, without vernalization. Spring cereals typically require more irrigation and yield less than winter cereals.
The greatest constraints on yield are cereal diseases, especially rusts (mostly the Puccinia spp.) and powdery mildews. Fusarium Head Blight (FHB) caused by Fusarium graminearum is also significant on a wide variety of cereals.
Once the cereal plants have grown their seeds, they have completed their life cycle. The plants die, become brown, and dry. As soon as the parent plants and their seed kernels are reasonably dry, harvest can begin.
In developed countries, cereal crops are almost universally machine-harvested, typically using a combine harvester, which cuts, threshes, and winnows the grain during a single pass across the field. In developing countries, a variety of harvesting methods are in use, depending on the cost of labor, from combines to hand tools such as the scythe or grain cradle.
If a crop is harvested during humid weather, the grain may not dry adequately in the field to prevent spoilage during its storage. In this case, the grain is sent to a dehydrating facility, where artificial heat dries it.
In North America, farmers commonly deliver their newly harvested grain to a grain elevator, a large storage facility that consolidates the crops of many farmers. The farmer may sell the grain at the time of delivery or maintain ownership of a share of grain in the pool for later sale.
Rice is an example of a cereal that requires little preparation before human consumption. For example, to make plain cooked rice, raw milled rice needs to be washed and submerged in simmering water for 10–12 minutes.
Cereals can be ground to make flour. Cereal flour, particularly wheat flour, is the main ingredient of bread, which is a staple food for many cultures. Maize flour has been important in Mesoamerican cuisine since ancient times and remains a staple in the Americas. Rye flour is a constituent of bread in central and northern Europe, while rice flour is common in Asia.
Cereal flour consists either of the endosperm, germ, and bran together (whole-grain flour) or of the endosperm alone (refined flour). Meal is either distinguishable from flour by its slightly coarser particle size (degree of comminution) or is synonymous with flour; the word is used both ways. For example, the word cornmeal often connotes a grittier texture whereas corn flour connotes fine powder, although there is no codified dividing line.
Because of cereals' high starch content, they are often used to make industrial alcohol and alcoholic drinks via fermentation. For instance, beer is produced by the brewing and fermentation of starches, mainly derived from cereal grains—most commonly from malted barley, though wheat, maize, rice, and oats are also used. During the brewing process, fermentation of the starch sugars in the wort produces ethanol and carbonation in the resulting beer.
The following table shows the annual production of cereals in 1961, 1980, 2000, 2010, and 2019/2020.
Maize, wheat, and rice together accounted for 89% of all cereal production worldwide in 2012, and 43% of the global supply of food energy in 2009, while the production of oats and rye has drastically fallen from its 1960s levels.
Other cereals not included in the U.N.'s Food and Agriculture Organization statistics include:
This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO, FAO. | [
{
"paragraph_id": 0,
"text": "A cereal is any grass cultivated for its edible grain (botanically, a type of fruit called a caryopsis), which is composed of an endosperm, a germ, and a bran. Cereal grain crops are grown in greater quantities and provide more food energy worldwide than any other type of crop and are therefore staple crops. They include rice, wheat, rye, oats, barley, millet, and maize. Edible grains from other plant families, such as buckwheat, quinoa, and chia, are referred to as pseudocereals.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In their unprocessed whole grain form, cereals are a rich source of vitamins, minerals, carbohydrates, fats, oils, and protein. When processed by the removal of the bran and germ, the remaining endosperm is mostly carbohydrate. In some developing countries, cereals constitute a majority of daily sustenance. In developed countries, cereal consumption is moderate and varied but still substantial, primarily in the form of refined and processed grains. Because of the dietary importance of cereals, the cereal trade is often at the heart of the food trade, with many cereals sold as commodities.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Agriculture allowed for the support of an increased population, leading to larger societies and eventually the development of cities. It also created the need for greater organization of political power (and the creation of social stratification), as decisions had to be made regarding labor and harvest allocation and access rights to water and land. Agriculture bred immobility, as populations settled down for long periods of time, which led to the accumulation of material goods.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Early Neolithic villages show evidence of the development of processing grain. The Levant is the ancient home of the ancestors of wheat, barley and peas, in which many of these villages were based. There is evidence of the cultivation of cereals in Syria approximately 9,000 years ago. Wheat, barley, rye, oats and flaxseeds were all domesticated in the Fertile Crescent during the early Neolithic. During the same period, farmers in China began to farm rice and millet, using human-made floods and fires as part of their cultivation regimen. Fiber crops were domesticated as early as food crops, with China domesticating hemp, cotton being developed independently in Africa and South America, and Western Asia domesticating flax. The use of soil conditioners, including manure, fish, compost and ashes, appears to have begun early, and developed independently in several areas of the world, including Mesopotamia, the Nile Valley and Eastern Asia.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The first cereal grains were domesticated by early primitive humans. About 8,000 years ago, they were domesticated by ancient farming communities in the Fertile Crescent region. Emmer wheat, einkorn wheat, and barley were three of the so-called Neolithic founder crops in the development of agriculture. Around the same time, millets and kinds of rice were starting to become domesticated in East Asia. Sorghum and millets were also being domesticated in sub-Saharan West Africa, which were both used primarily as feed for livestock.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Cereals were the foundation of human civilization. In the Mesopotamian creation myth, an era of civilization is inaugurated by the grain goddess Ashnan. Cereal frontiers coincided with civilizational frontiers. The term Fertile Crescent implies the spatial dependence of civilization on cereals. The Great Wall of China and the Roman limes demarcated the same northern limit of cereal cultivation. The Silk Road stretched along the cereal belt of Eurasia. Numerous Chinese imperial edicts stated: \"Agriculture is the foundation of this empire,\" while the foundation of agriculture were the Five Grains. The word cereal is derived from Ceres, the Roman goddess of harvest and agriculture.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Cereals determined how large and for how long an army could be mobilized. For this reason, Shang Yang called agriculture and war \"the One\". Guan Zhong, Chanakya (the author of Arthashastra) and Hannibal expressed similar concepts. At the dawn of history, the Sumerians believed that if the agriculture of a state declines, Inanna, the goddess of war, leaves this state. Several gods of antiquity combined the functions of what Shang Yang called \"the One\" – agriculture and war: the Hittite Sun goddess of Arinna, the Canaanite Lahmu and the Roman Janus. These were highly important gods in their time leaving their legacy until today. We still begin the year with the month of Janus (January). The Jews believe that Messiah's family will originate in the town of Lahmu (Bethlehem); in Hebrew, beit lehem literally means \"house of bread\". Christians believe that Jesus Christ, who is said to have been born in Bethlehem, is the Messiah. In Hebrew, bread (lehem) and warfare (milhama) are of the same root. In fact, most persistent and flourishing empires throughout history in both hemispheres were centered in regions fertile for cereals.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The bond between cereal and imperial powers was not broken, not even in the Industrial Age. All modern great powers have traditionally remained first and foremost great cereal powers. The \"finest hour\" of the Axis powers \"ended precisely the moment they threw themselves against the two largest cereal lebensraums\" (the United States and the USSR). The outcome of the Cold War followed the Soviet grave and long-lasting cereal crisis, exacerbated by the cereal embargo imposed on the USSR in 1980. And, called \"the grain basket of the world,\" the most productive \"cereal lebensraum\" dominates the world ever since.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Having analyzed the mechanism at work behind this pattern, Ostrovsky outlined that the cereal power determines the percentage of manpower available to non-agricultural sectors including the heavy industry vital for military power. He emphasized that chronologically the Industrial Revolution follows the modern Agricultural Revolution and spatially the world's industrial regions are bound to cereal regions. Taken from space, map of the global illumination is said to indicate by its brightest parts the industrial regions. These regions coincide with cereal regions. Ostrovsky formulized a universal indicator of national power valid for all periods: total cereal tonnage produced by one percent of nation's manpower. For the present, this indicator demonstrates a unipolar international hierarchy.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Cereals are the most traded commodity by quantity in 2021: the Americas and Europe are the largest exporters and Asia is the largest importer.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "During the second half of the 20th century there was a significant increase in the production of high-yield cereal crops worldwide, especially wheat and rice, due to an initiative known as the Green Revolution. The strategies developed by the Green Revolution focused on fending off starvation and increasing yield-per-plant, and were very successful in raising overall yields of cereal grains, but did not give sufficient relevance to nutritional quality. These modern high-yield cereal crops tend to have low quality proteins, with essential amino acid deficiencies, are high in carbohydrates, and lack balanced essential fatty acids, vitamins, minerals and other quality factors. So-called ancient grains and heirloom varieties have seen an increase in popularity with the \"organic\" movements of the early 21st century, but there is a tradeoff in yield-per-plant, putting pressure on resource-poor areas as food crops are replaced with cash crops.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Cereals belong to the family Poaceae, commonly known as grasses. Grasses have stems that are hollow except at the nodes and narrow alternate leaves borne in two ranks. The lower part of each leaf encloses the stem, forming a leaf-sheath. The leaf grows from the base of the blade, an adaptation allowing it to cope with frequent grazing. The flowers are usually hermaphroditic—maize being an important exception—and mainly anemophilous or wind-pollinated, although insects occasionally play a role.",
"title": "Common features"
},
{
"paragraph_id": 12,
"text": "Some of the most-well known cereals are maize, rice, wheat, barley, sorghum, millet, oat, rye and triticale. Some pseudocereals are colloquially called cereal, even though botanically they do not belong to the Poaceae family; these include buckwheat, quinoa, and amaranth.",
"title": "Common features"
},
{
"paragraph_id": 13,
"text": "Some cereals are deficient in the essential amino acid lysine. That is why many vegetarian cultures, in order to get a balanced diet, combine their diet of cereal grains with legumes. Many legumes, however, are deficient in the essential amino acid methionine, which grains contain. Thus, a combination of legumes with grains forms a well-balanced diet for vegetarians. Common examples of such combinations are dal (lentils) with rice by South Indians and Bengalis, dal with wheat in Pakistan and North India, beans with maize tortillas, tofu with rice, and peanut butter with wholegrain wheat bread (as sandwiches) in several other cultures, including the Americas. The amount of crude protein measured in grains is expressed as grain crude protein concentration.",
"title": "Common features"
},
{
"paragraph_id": 14,
"text": "Cereals contain exogenous opioid food peptides called exorphins such as gluten exorphin. They mimic the actions of endorphins because they bind to the same opioid receptors in the brain.",
"title": "Common features"
},
{
"paragraph_id": 15,
"text": "While each individual species has its own peculiarities, the cultivation of all cereal crops is similar. Most are annual plants; consequently one planting yields one harvest. Cereals that are adapted to grow in temperate climate are called cold-season cereals, and those grown in tropical climate are called warm-season cereals. Wheat, rye, triticale, oats, barley, and spelt are the \"cool-season\" cereals. These are hardy plants that grow well in moderate weather and cease to grow in hot weather (approximately 30 °C or 85 °F, but this varies by species and variety). The \"warm-season\" cereals are tender and prefer hot weather. Barley and rye are the hardiest cereals, able to overwinter in the subarctic and Siberia. Many cool-season cereals are grown in the tropics. However, some are only grown in cooler highlands, where it may be possible to grow multiple crops per year.",
"title": "Cultivation"
},
{
"paragraph_id": 16,
"text": "For the past few decades, there has also been increasing interest in perennial grain plants. This interest developed due to advantages in erosion control, reduced need for fertilizer, and potentially lowered costs to the farmer. Though research is still in early stages, The Land Institute in Salina, Kansas, has been able to create a few cultivars that produce a fairly good crop yield.",
"title": "Cultivation"
},
{
"paragraph_id": 17,
"text": "The warm-season cereals are grown in tropical lowlands year-round and in temperate climates during the frost-free season. Rice is commonly grown in flooded fields, though some strains are grown on dry land. Other warm climate cereals, such as sorghum, are adapted to arid conditions.",
"title": "Cultivation"
},
{
"paragraph_id": 18,
"text": "Cool-season cereals are well-adapted to temperate climates. Most varieties of a particular species are either winter or spring types. Winter varieties are sown in the autumn, germinate and grow vegetatively, then become dormant during winter. They resume growing in the springtime and mature in late spring or early summer. This cultivation system makes optimal use of water and frees the land for another crop early in the growing season.",
"title": "Cultivation"
},
{
"paragraph_id": 19,
"text": "Winter varieties do not flower until springtime because they need vernalization: exposure to low temperatures for a genetically determined length of time. Where winters are too warm for vernalization or exceed the hardiness of the crop (which varies by species and variety), farmers grow spring varieties. Spring cereals are planted in early springtime and mature later that same summer, without vernalization. Spring cereals typically require more irrigation and yield less than winter cereals.",
"title": "Cultivation"
},
{
"paragraph_id": 20,
"text": "The greatest constraints on yield are cereal diseases, especially rusts (mostly the Puccinia spp.) and powdery mildews. Fusarium Head Blight (FHB) caused by Fusarium graminearum is also significant on a wide variety of cereals.",
"title": "Cultivation"
},
{
"paragraph_id": 21,
"text": "Once the cereal plants have grown their seeds, they have completed their life cycle. The plants die, become brown, and dry. As soon as the parent plants and their seed kernels are reasonably dry, harvest can begin.",
"title": "Cultivation"
},
{
"paragraph_id": 22,
"text": "In developed countries, cereal crops are almost universally machine-harvested, typically using a combine harvester, which cuts, threshes, and winnows the grain during a single pass across the field. In developing countries, a variety of harvesting methods are in use, depending on the cost of labor, from combines to hand tools such as the scythe or grain cradle.",
"title": "Cultivation"
},
{
"paragraph_id": 23,
"text": "If a crop is harvested during humid weather, the grain may not dry adequately in the field to prevent spoilage during its storage. In this case, the grain is sent to a dehydrating facility, where artificial heat dries it.",
"title": "Cultivation"
},
{
"paragraph_id": 24,
"text": "In North America, farmers commonly deliver their newly harvested grain to a grain elevator, a large storage facility that consolidates the crops of many farmers. The farmer may sell the grain at the time of delivery or maintain ownership of a share of grain in the pool for later sale.",
"title": "Cultivation"
},
{
"paragraph_id": 25,
"text": "Rice is an example of a cereal that requires little preparation before human consumption. For example, to make plain cooked rice, raw milled rice needs to be washed and submerged in simmering water for 10–12 minutes.",
"title": "Uses"
},
{
"paragraph_id": 26,
"text": "Cereals can be ground to make flour. Cereal flour, particularly wheat flour, is the main ingredient of bread, which is a staple food for many cultures. Maize flour has been important in Mesoamerican cuisine since ancient times and remains a staple in the Americas. Rye flour is a constituent of bread in central and northern Europe, while rice flour is common in Asia.",
"title": "Uses"
},
{
"paragraph_id": 27,
"text": "Cereal flour consists either of the endosperm, germ, and bran together (whole-grain flour) or of the endosperm alone (refined flour). Meal is either differentiable from flour as having slightly coarser particle size (degree of comminution) or is synonymous with flour; the word is used both ways. For example, the word cornmeal often connotes a grittier texture whereas corn flour connotes fine powder, although there is no codified dividing line.",
"title": "Uses"
},
{
"paragraph_id": 28,
"text": "Because of cereals' high starch content, they are often used to make Industrial alcohol and alcoholic drinks via fermentation. For instance, beer is produced by the brewing and fermentation of starches, mainly derived from cereal grains—most commonly from malted barley, though wheat, maize, rice, and oats are also used. During the brewing process, fermentation of the starch sugars in the wort produces ethanol and carbonation in the resulting beer.",
"title": "Uses"
},
{
"paragraph_id": 29,
"text": "The following table shows the annual production of cereals in 1961, 1980, 2000, 2010, and 2019/2020.",
"title": "Production statistics"
},
{
"paragraph_id": 30,
"text": "Maize, wheat, and rice together accounted for 89% of all cereal production worldwide in 2012, and 43% of the global supply of food energy in 2009, while the production of oats and rye have drastically fallen from their 1960s levels.",
"title": "Production statistics"
},
{
"paragraph_id": 31,
"text": "Other cereals not included in the U.N.'s Food and Agriculture Organization statistics include:",
"title": "Production statistics"
},
{
"paragraph_id": 32,
"text": "This article incorporates text from a free content work. Licensed under CC BY-SA IGO 3.0 (license statement/permission). Text taken from World Food and Agriculture – Statistical Yearbook 2023, FAO, FAO.",
"title": "Sources"
}
] | A cereal is any grass cultivated for its edible grain, which is composed of an endosperm, a germ, and a bran. Cereal grain crops are grown in greater quantities and provide more food energy worldwide than any other type of crop and are therefore staple crops. They include rice, wheat, rye, oats, barley, millet, and maize. Edible grains from other plant families, such as buckwheat, quinoa, and chia, are referred to as pseudocereals. In their unprocessed whole grain form, cereals are a rich source of vitamins, minerals, carbohydrates, fats, oils, and protein. When processed by the removal of the bran and germ, the remaining endosperm is mostly carbohydrate. In some developing countries, cereals constitute a majority of daily sustenance. In developed countries, cereal consumption is moderate and varied but still substantial, primarily in the form of refined and processed grains. Because of the dietary importance of cereals, the cereal trade is often at the heart of the food trade, with many cereals sold as commodities. | 2001-10-05T01:23:25Z | 2023-12-30T20:19:11Z | [
"Template:Cite web",
"Template:Convert",
"Template:See also",
"Template:Portal",
"Template:Reflist",
"Template:Agriculture country lists",
"Template:Use dmy dates",
"Template:Further",
"Template:Div col end",
"Template:One source",
"Template:Cite book",
"Template:Authority control",
"Template:ISBN",
"Template:Subject bar",
"Template:Short description",
"Template:Hatgrp",
"Template:Webarchive",
"Template:Vegetarianism",
"Template:Main",
"Template:Efn",
"Template:Notelist",
"Template:Cite news",
"Template:Cereals",
"Template:Cite journal",
"Template:Rp",
"Template:Div col",
"Template:Cite magazine",
"Template:Free-content attribution"
] | https://en.wikipedia.org/wiki/Cereal |
6,704 | Christendom | Christendom historically refers to the Christian states, Christian empires, Christian-majority countries and the countries in which Christianity dominates, prevails, or that it is culturally or historically intertwined with.
Following the spread of Christianity from the Levant to Europe and North Africa during the early Roman Empire, Christendom became divided along the pre-existing lines of the Greek East and Latin West. Consequently, internal sects within the Christian religion arose with their own beliefs and practices, centred around the cities of Rome (Western Christianity, whose community was called Western or Latin Christendom) and Constantinople (Eastern Christianity, whose community was called Eastern Christendom). From the 11th to the 13th centuries, Latin Christendom rose to a central role in the Western world. The history of the Christian world spans about 2,000 years and includes a variety of socio-political developments, as well as advancements in the arts, architecture, literature, science, philosophy, and technology.
The term usually refers to the Middle Ages and the Early Modern period during which the Christian world represented a geopolitical power that was juxtaposed with both the pagan and especially the Muslim world.
The Anglo-Saxon term crīstendōm appears to have been coined in the 9th century by a scribe somewhere in southern England, possibly at the court of King Alfred the Great of Wessex. The scribe was translating Paulus Orosius' book History Against the Pagans (c. 416) and needed a term to express the concept of the universal culture focused on Jesus Christ. It had the sense now taken by Christianity (as is still the case with the cognate Dutch christendom, where it denotes mostly the religion itself, just like the German Christentum).
The current sense of the word of "lands where Christianity is the dominant religion" emerged in Late Middle English (by c. 1400).
Canadian theology professor Douglas John Hall stated (1997) that "Christendom [...] means literally the dominion or sovereignty of the Christian religion." Thomas John Curry, Roman Catholic auxiliary bishop of Los Angeles, defined (2001) Christendom as "the system dating from the fourth century by which governments upheld and promoted Christianity." Curry states that the end of Christendom came about because modern governments refused to "uphold the teachings, customs, ethos, and practice of Christianity." British church historian Diarmaid MacCulloch described (2010) Christendom as "the union between Christianity and secular power."
Christendom was originally a medieval concept that evolved steadily from the fall of the Western Roman Empire and the gradual rise of the Papacy, taking on practical religio-temporal implications during and after the reign of Charlemagne. In the minds of staunch believers, the concept became the archetype of a holy religious space inhabited by Christians, blessed by God, the Heavenly Father, ruled by Christ through the Church, and protected by the Spirit-body of Christ. As it came to include the whole of Europe and then the expanding Christian territories on earth, the concept strengthened the romance of the greatness of Christianity in the world.
There is a common and nonliteral sense of the word that is much like the terms Western world, known world or Free World. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity.
Early Christianity spread in the Greek/Roman world and beyond as a 1st-century Jewish sect, which historians refer to as Jewish Christianity. It may be divided into two distinct phases: the apostolic period, when the first apostles were alive and organizing the Church, and the post-apostolic period, when an early episcopal structure developed, whereby bishoprics were governed by bishops (overseers).
The post-apostolic period concerns the time roughly after the death of the apostles when bishops emerged as overseers of urban Christian populations. The earliest recorded use of the terms Christianity (Greek Χριστιανισμός) and catholic (Greek καθολικός) dates to this period, the 2nd century, attributed to Ignatius of Antioch c. 107. Early Christendom would close with the end of imperial persecution of Christians after the accession of Constantine the Great and the Edict of Milan in AD 313 and the First Council of Nicaea in 325.
According to Malcolm Muggeridge (1980), Christ founded Christianity, but Constantine founded Christendom. Canadian theology professor Douglas John Hall dates the 'inauguration of Christendom' to the 4th century, with Constantine playing the primary role (so much so that he equates Christendom with "Constantinianism") and Theodosius I (Edict of Thessalonica, 380) and Justinian I secondary roles.
"Christendom" has referred to the medieval and renaissance notion of the Christian world as a polity. In essence, the earliest vision of Christendom was a vision of a Christian theocracy, a government founded upon and upholding Christian values, whose institutions are spread through and over with Christian doctrine. In this period, members of the Christian clergy wield political authority. The specific relationship between the political leaders and the clergy varied but, in theory, the national and political divisions were at times subsumed under the leadership of the church as an institution. This model of church-state relations was accepted by various Church leaders and political leaders in European history.
The Church gradually became a defining institution of the Roman Empire. Emperor Constantine issued the Edict of Milan in 313 proclaiming toleration for the Christian religion, and convoked the First Council of Nicaea in 325 whose Nicene Creed included belief in "one holy catholic and apostolic Church". Emperor Theodosius I made Nicene Christianity the state church of the Roman Empire with the Edict of Thessalonica of 380. In terms of prosperity and cultural life, the Byzantine Empire was one of the peaks in Christian history and Christian civilization, and Constantinople remained the leading city of the Christian world in size, wealth, and culture. There was a renewed interest in classical Greek philosophy, as well as an increase in literary output in vernacular Greek.
As the Western Roman Empire disintegrated into feudal kingdoms and principalities, the concept of Christendom changed as the western church became one of the five patriarchates of the Pentarchy and the Christianity of the Eastern Roman Empire developed along its own lines. The Byzantine Empire was the last bastion of Christendom. Christendom would take a turn with the rise of the Franks, a Germanic tribe who converted to the Christian faith and entered into communion with Rome.
On Christmas Day 800 AD, Pope Leo III crowned Charlemagne, resulting in the creation of another Christian king beside the Christian emperor in the Byzantine state. The Carolingian Empire created a definition of Christendom in juxtaposition with the Byzantine Empire, that of a distributed versus centralized culture respectively.
The classical heritage flourished throughout the Middle Ages in both the Byzantine Greek East and the Latin West. In the Greek philosopher Plato's ideal state there are three major classes, representative of the idea of the "tripartite soul", which expresses three functions or capacities of the human soul: "reason", "the spirited element", and "appetites" (or "passions"). Will Durant made a convincing case that certain prominent features of Plato's ideal community were discernible in the organization, dogma and effectiveness of "the" Medieval Church in Europe:
... For a thousand years Europe was ruled by an order of guardians considerably like that which was visioned by our philosopher. During the Middle Ages it was customary to classify the population of Christendom into laboratores (workers), bellatores (soldiers), and oratores (clergy). The last group, though small in number, monopolized the instruments and opportunities of culture, and ruled with almost unlimited sway half of the most powerful continent on the globe. The clergy, like Plato's guardians, were placed in authority... by their talent as shown in ecclesiastical studies and administration, by their disposition to a life of meditation and simplicity, and ... by the influence of their relatives with the powers of state and church. In the latter half of the period in which they ruled [800 AD onwards], the clergy were as free from family cares as even Plato could desire [for such guardians]... [Clerical] Celibacy was part of the psychological structure of the power of the clergy; for on the one hand they were unimpeded by the narrowing egoism of the family, and on the other their apparent superiority to the call of the flesh added to the awe in which lay sinners held them....
After the collapse of Charlemagne's empire, the southern remnants of the Holy Roman Empire became a collection of states loosely connected to the Holy See of Rome. Tensions between Pope Innocent III and secular rulers ran high, as the pontiff exerted control over his temporal counterparts in the west and vice versa. The pontificate of Innocent III is considered the height of the temporal power of the papacy. The Corpus Christianum described the then-current notion of the community of all Christians united under the Roman Catholic Church. The community was to be guided by Christian values in its politics, economics and social life. Its legal basis was the corpus iuris canonica (body of canon law).
In the East, Christendom became more defined with the Byzantine Empire's gradual loss of territory to an expanding Islam and the Muslim conquest of Persia. This caused Christianity to become important to the Byzantine identity. Before the East–West Schism which divided the Church religiously, there had been the notion of a universal Christendom that included the East and the West. After the East–West Schism, hopes of regaining religious unity with the West were ended by the Fourth Crusade, when Crusaders conquered the Byzantine capital of Constantinople and hastened the decline of the Byzantine Empire on the path to its destruction. With the breakup of the Byzantine Empire into individual nations with nationalist Orthodox Churches, the term Christendom described Western Europe, Catholicism, Orthodox Byzantines, and other Eastern rites of the Church.
The Catholic Church's peak of authority over all European Christians and their common endeavours of the Christian community—for example, the Crusades, the fight against the Moors in the Iberian Peninsula and against the Ottomans in the Balkans—helped to develop a sense of communal identity against the obstacle of Europe's deep political divisions. The popes, formally just the bishops of Rome, claimed to be the focus of all Christendom, which was largely recognised in Western Christendom from the 11th century until the Reformation, but not in Eastern Christendom. Moreover, this authority was also sometimes abused, and fostered the Inquisition and anti-Jewish pogroms, to root out divergent elements and create a religiously uniform community.
Christendom was ultimately led into a specific crisis in the late Middle Ages, when the kings of France managed to establish a French national church during the 14th century and the papacy became ever more aligned with the Holy Roman Empire of the German Nation. In the Western Schism, western Christendom was split between three men who simultaneously claimed to be the true pope, driven by politics rather than any real theological disagreement. The Avignon Papacy developed a reputation for corruption that estranged major parts of Western Christendom. The Avignon schism was ended by the Council of Constance.
Before the modern period, Christendom was in a general crisis at the time of the Renaissance Popes because of the moral laxity of these pontiffs and their willingness to seek and rely on temporal power as secular rulers did. Many in the Catholic Church's hierarchy in the Renaissance became increasingly entangled in an insatiable greed for material wealth and temporal power, which led to many reform movements, some merely wanting a moral reformation of the Church's clergy, while others repudiated the Church and separated from it in order to form new sects. The Italian Renaissance produced ideas or institutions by which men living in society could be held together in harmony. In the early 16th century, Baldassare Castiglione (The Book of the Courtier) laid out his vision of the ideal gentleman and lady, while Machiavelli cast a jaundiced eye on "la verità effetuale delle cose"—the actual truth of things—in The Prince, composed, humanist style, chiefly of parallel ancient and modern examples of Virtù. Some Protestant movements grew up along lines of mysticism or renaissance humanism (cf. Erasmus). The Catholic Church fell partly into general neglect under the Renaissance Popes, whose inability to govern the Church by showing personal example of high moral standards set the climate for what would ultimately become the Protestant Reformation. During the Renaissance, the papacy was mainly run by wealthy families and also had strong secular interests. To safeguard Rome and the connected Papal States the popes became necessarily involved in temporal matters, even leading armies, as the great patron of arts Pope Julius II did. During these intermediate times, popes strove to make Rome the capital of Christendom while projecting it through art, architecture, and literature as the center of a Golden Age of unity, order, and peace.
Professor Frederick J. McGinness described Rome as essential to understanding the legacy of the Church and its representatives, a legacy encapsulated best by the Eternal City:
No other city in Europe matches Rome in its traditions, history, legacies, and influence in the Western world. Rome in the Renaissance under the papacy not only acted as guardian and transmitter of these elements stemming from the Roman Empire but also assumed the role as artificer and interpreter of its myths and meanings for the peoples of Europe from the Middle Ages to modern times... Under the patronage of the popes, whose wealth and income were exceeded only by their ambitions, the city became a cultural center for master architects, sculptors, musicians, painters, and artisans of every kind... In its myth and message, Rome had become the sacred city of the popes, the prime symbol of a triumphant Catholicism, the center of orthodox Christianity, a new Jerusalem.
It is noticeable that the popes of the Italian Renaissance have been treated by many writers in an overly harsh tone. Pope Julius II, for example, was not only an effective secular leader in military affairs and a deviously effective politician but, above all, one of the greatest patrons of the Renaissance period, a man who also encouraged open criticism from noted humanists.
The blossoming of Renaissance humanism was made possible by the universality of the institutions of the Catholic Church, and it was represented by personalities such as Pope Pius II, Nicolaus Copernicus, Leon Battista Alberti, Desiderius Erasmus, Sir Thomas More, Bartolomé de Las Casas, Leonardo da Vinci and Teresa of Ávila. In The Life of Reason, George Santayana described the all-encompassing order the Church had brought and its role as the repository of the legacy of classical antiquity:
The enterprise of individuals or of small aristocratic bodies has meantime sown the world which we call civilised with some seeds and nuclei of order. There are scattered about a variety of churches, industries, academies, and governments. But the universal order once dreamt of and nominally almost established, the empire of universal peace, all-permeating rational art, and philosophical worship, is mentioned no more. An unformulated conception, the prerational ethics of private privilege and national unity, fills the background of men's minds. It represents feudal traditions rather than the tendency really involved in contemporary industry, science, or philanthropy. Those dark ages, from which our political practice is derived, had a political theory which we should do well to study; for their theory about a universal empire and a Catholic church was in turn the echo of a former age of reason, when a few men conscious of ruling the world had for a moment sought to survey it as a whole and to rule it justly.
Developments in western philosophy and European events brought change to the notion of the Corpus Christianum. The Hundred Years' War accelerated the transformation of France from a feudal monarchy into a centralized state, and the rise of strong, centralized monarchies marked the European transition from feudalism to capitalism. By the end of the Hundred Years' War, both France and England were able to raise enough money through taxation to maintain independent standing armies. In the Wars of the Roses, Henry Tudor took the crown of England; his heir, the absolutist king Henry VIII, would go on to establish the English church.
In modern history, the Reformation and the rise of modernity in the early 16th century entailed a change in the Corpus Christianum. In the Holy Roman Empire, the Peace of Augsburg of 1555 officially ended the idea among secular leaders that all Christians must be united under one church. The principle of cuius regio, eius religio ("whose realm, his religion") established the religious, political and geographic divisions of Christianity, an arrangement confirmed by the Treaty of Westphalia in 1648, which legally ended the concept of a single Christian hegemony in the territories of the Holy Roman Empire, despite the Catholic Church's doctrine that it alone is the one true Church founded by Christ. Subsequently, each government determined the religion of its own state. Christians living in states where their denomination was not the established one were guaranteed the right to practice their faith in public during allotted hours and in private at will. At times there were mass expulsions of dissenting faiths, as happened with the Salzburg Protestants. Some people passed as adhering to the official church while living as Nicodemites or crypto-Protestants.
The European wars of religion are usually taken to have ended with the Treaty of Westphalia (1648) or, if the Nine Years' War and the War of the Spanish Succession are included in this period, with the Treaty of Utrecht of 1713. In the 18th century, the focus shifted away from religious conflicts, whether between Christian factions or against the external threat of Islamic powers.
The European Miracle, the Age of Enlightenment and the formation of the great colonial empires, together with the beginning of the decline of the Ottoman Empire, marked the end of the geopolitical "history of Christendom". Instead, the focus of Western history shifted to the development of the nation-state, accompanied by increasing atheism and secularism, culminating in the French Revolution and the Napoleonic Wars at the turn of the 19th century.
Writing in 1997, Canadian theology professor Douglas John Hall argued that Christendom had either fallen already or was in its death throes; although its end was gradual and not as easy to pin down as its 4th-century establishment, the "transition to the post-Constantinian, or post-Christendom, situation (...) has already been in process for a century or two," beginning with the 18th-century rationalist Enlightenment and the French Revolution (the first attempt to topple the Christian establishment). American Catholic bishop Thomas John Curry stated (2001) that the end of Christendom came about because modern governments refused to "uphold the teachings, customs, ethos, and practice of Christianity." He argued that the First Amendment to the United States Constitution (1791) and the Second Vatican Council's Declaration on Religious Freedom (1965) are two of the most important documents setting the stage for its end. According to British historian Diarmaid MacCulloch (2010), Christendom was 'killed' by the First World War (1914–18), which led to the fall of the three main Christian empires of Europe (Russian, German and Austrian), as well as of the Ottoman Empire, rupturing the Eastern Christian communities that had existed on its territory. The Christian empires were replaced by secular, even anti-clerical, republics seeking to keep the churches definitively out of politics. The only surviving monarchy with an established church, Britain, was severely damaged by the war, lost most of Ireland due to Catholic–Protestant infighting, and was starting to lose its grip on its colonies.
Changes in worldwide Christianity over the last century have been significant: since 1900, Christianity has spread rapidly in the Global South and the Third World. The late 20th century showed a shift of Christian adherence towards the Third World and the Southern Hemisphere in general, and by 2010 about 157 countries and territories in the world had Christian majorities.
Western culture, throughout most of its history, has been nearly equivalent to Christian culture, and much of the population of the Western hemisphere could broadly be described as cultural Christians. The notion of "Europe" and the "Western World" has been intimately connected with the concept of "Christianity and Christendom"; many even credit Christianity with being the link that created a unified European identity. Historian Paul Legutko of Stanford University said the Catholic Church is "at the center of the development of the values, ideas, science, laws, and institutions which constitute what we call Western civilization."
Though Western culture contained several polytheistic religions during its early years under the Greek and Roman Empires, as centralized Roman power waned, the Catholic Church became the only consistent force in Western Europe. Until the Age of Enlightenment, Christian culture guided the course of philosophy, literature, art, music and science. Christian disciplines of the respective arts subsequently developed into Christian philosophy, Christian art, Christian music, Christian literature and so on. Art and literature, law, education, and politics were preserved in the teachings of the Church, in an environment that would otherwise probably have seen their loss. The Church founded many cathedrals, universities, monasteries and seminaries, some of which continue to exist today. Medieval Christianity created the first modern universities. The Catholic Church established a hospital system in medieval Europe that vastly improved upon the Roman valetudinaria. These hospitals were established to cater to "particular social groups marginalized by poverty, sickness, and age," according to the historian of hospitals Guenter Risse. Christianity also had a strong impact on all other aspects of life: marriage and family, education, the humanities and sciences, the political and social order, the economy, and the arts.
Christianity had a significant impact on education, science and medicine: the church created the basis of the Western system of education and sponsored the founding of universities in the Western world, the university being generally regarded as an institution that has its origin in the medieval Christian setting. Many clerics throughout history have made significant contributions to science, and the Jesuits in particular have made numerous significant contributions to the development of science. The cultural influence of Christianity includes social welfare, the founding of hospitals, economics (as the Protestant work ethic), natural law (which would later influence the creation of international law), politics, architecture, literature, personal hygiene, and family life. Christianity played a role in ending practices common among pagan societies, such as human sacrifice, slavery, infanticide and polygamy.
Christian literature is writing that deals with Christian themes and incorporates the Christian world view. It constitutes a huge body of extremely varied writing. Christian poetry is any poetry that contains Christian teachings, themes, or references. The influence of Christianity on poetry has been great in any area where Christianity has taken hold. Some Christian poems directly reference the Bible, while others proceed by allegory.
Christian art is art produced in an attempt to illustrate, supplement and portray in tangible form the principles of Christianity. Virtually all Christian groupings use or have used art to some extent. The prominence of art, and the media, styles and representations employed, have changed over time; the unifying theme, however, is ultimately the representation of the life and times of Jesus and, in some cases, of the Old Testament. Depictions of saints are also common, especially in Anglicanism, Roman Catholicism, and Eastern Orthodoxy.
An illuminated manuscript is a manuscript in which the text is supplemented by the addition of decoration. The earliest surviving substantive illuminated manuscripts are from the period AD 400 to 600, primarily produced in Ireland, Constantinople and Italy. The majority of surviving manuscripts are from the Middle Ages, although many illuminated manuscripts survive from the 15th-century Renaissance, along with a very limited number from Late Antiquity.
Most illuminated manuscripts were created as codices, which had superseded scrolls; some isolated single sheets survive, and a very few illuminated manuscript fragments survive on papyrus. Most medieval manuscripts, illuminated or not, were written on parchment (most commonly of calf, sheep, or goat skin), but most manuscripts important enough to illuminate were written on the best quality of parchment, called vellum, traditionally made of unsplit calfskin, though high-quality parchment from other skins was also sometimes called vellum.
Christian art began, about two centuries after Christ, by borrowing motifs from Roman Imperial imagery, classical Greek and Roman religion, and popular art. Religious images are used to some extent by the Christian faith and often contain highly complex iconography, which reflects centuries of accumulated tradition. In the Late Antique period iconography began to be standardised and to relate more closely to Biblical texts, although many gaps in the canonical Gospel narratives were filled with material from the apocryphal gospels. Eventually the Church would succeed in weeding most of these out, but some remain, like the ox and ass in the Nativity of Christ.
An icon is a religious work of art, most commonly a painting, from Eastern Christianity. Christianity has used symbolism from its very beginnings. In both East and West, numerous iconic types of Christ, Mary, saints and other subjects were developed; the number of named types of icons of Mary, with or without the infant Christ, was especially large in the East, whereas Christ Pantocrator was by far the most common image of Christ.
Christian symbolism invests objects or actions with an inner meaning expressing Christian ideas. Christianity has borrowed from the common stock of significant symbols known to most periods and to all regions of the world. Religious symbolism is effective when it appeals to both the intellect and the emotions. Especially important depictions of Mary include the Hodegetria and Panagia types. Traditional models evolved for narrative paintings, including large cycles covering the events of the Life of Christ, the Life of the Virgin, parts of the Old Testament, and, increasingly, the lives of popular saints. Especially in the West, a system of attributes developed for identifying individual figures of saints by a standard appearance and by symbolic objects held by them; in the East, figures were more likely to be identified by text labels.
Each saint has a story and a reason why he or she led an exemplary life, and symbols have been used to tell these stories throughout the history of the Church. A number of Christian saints are traditionally represented by a symbol or iconic motif associated with their life, termed an attribute or emblem, in order to identify them. The study of these attributes forms part of iconography in art history.
Christian architecture encompasses a wide range of both secular and religious styles from the foundation of Christianity to the present day, influencing the design and construction of buildings and structures in Christian culture.
Buildings were at first adapted from those originally intended for other purposes but, with the rise of distinctively ecclesiastical architecture, church buildings came to influence secular ones, which have often imitated religious architecture. In the 20th century, the use of new materials such as concrete, as well as simpler styles, has had its effect upon the design of churches, and arguably the flow of influence has been reversed. From the birth of Christianity to the present, the most significant transformation in Christian architecture in the West was the development of the Gothic cathedral. In the East, Byzantine architecture was a continuation of Roman architecture.
Christian philosophy is a term used to describe the fusion of various fields of philosophy with the theological doctrines of Christianity. Scholasticism, which means "that [which] belongs to the school", was a method of learning taught by the academics (or schoolmen) of medieval universities c. 1100–1500. Scholasticism originally arose to reconcile the philosophy of the ancient classical philosophers with medieval Christian theology. It is not a philosophy or theology in itself but a tool and method for learning which places emphasis on dialectical reasoning.
The Byzantine Empire, heir to the most sophisticated culture of antiquity, suffered under the Muslim conquests, which limited its scientific prowess during the medieval period. Christian Western Europe had suffered a catastrophic loss of knowledge following the fall of the Western Roman Empire. But thanks to Church scholars such as Aquinas and Buridan, the West carried on at least the spirit of scientific inquiry, which, aided by translations of medieval works, would later lead to Europe taking the lead in science during the Scientific Revolution.
Medieval technology refers to the technology used in medieval Europe under Christian rule. After the Renaissance of the 12th century, medieval Europe saw a radical change in the rate of new inventions, innovations in the ways of managing traditional means of production, and economic growth. The period saw major technological advances, including the adoption of gunpowder and the astrolabe, the invention of spectacles, and greatly improved water mills, building techniques, agriculture in general, clocks, and ships. The latter advances made possible the dawn of the Age of Exploration. The development of water mills was impressive, extending from agriculture to sawmills for both timber and stone, and was probably derived from Roman technology. By the time of the Domesday Book, most large villages in Britain had mills. Water mills were also widely used in mining, as described by Georg Agricola in De Re Metallica, for raising ore from shafts, crushing ore, and even powering bellows.
Significant in this respect were advances in the field of navigation. The compass and astrolabe, along with advances in shipbuilding, enabled the navigation of the world's oceans and thus domination of the world's economic trade. Gutenberg's printing press made possible the dissemination of knowledge to a wider population, which would not only lead to a gradually more egalitarian society but also to one more able to dominate other cultures, drawing from a vast reserve of knowledge and experience.
During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, mathematics, manufacturing, and engineering. The rediscovery of ancient scientific texts, accelerated after the Fall of Constantinople, and the invention of printing democratized learning and allowed a faster propagation of new ideas. Renaissance technology is the set of artifacts and customs spanning roughly the 14th through the 16th century. The era is marked by such profound technical advancements as the printing press, linear perspective, patent law, double-shell domes and bastion fortresses. Draw-books of Renaissance artist-engineers such as Taccola and Leonardo da Vinci give a deep insight into the mechanical technology then known and applied.
Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement. The Scientific Renaissance was the early phase of the Scientific Revolution: in the two-phase model of early modern science, a Scientific Renaissance of the 15th and 16th centuries focused on the restoration of the natural knowledge of the ancients, while a Scientific Revolution of the 17th century saw scientists shift from recovery to innovation. Some scholars and historians attribute the rise of the Scientific Revolution in part to Christianity.
Professor Noah J. Efron says that "Generations of historians and sociologists have discovered many ways in which Christians, Christian beliefs, and Christian institutions played crucial roles in fashioning the tenets, methods, and institutions of what in time became modern science. They found that some forms of Christianity provided the motivation to study nature systematically..." Virtually all modern scholars and historians agree that Christianity moved many early-modern intellectuals to study nature systematically.
In 2009, according to the Encyclopædia Britannica, Christianity was the majority religion in Europe (including Russia) with 80%, Latin America with 92%, North America with 81%, and Oceania with 79%. There are also large Christian communities in other parts of the world, such as China, India and Central Asia, where Christianity is the second-largest religion after Islam. The United States is home to the world's largest Christian population, followed by Brazil and Mexico.
Many Christians not only live under, but also have official status in, the state religions of the following nations: Argentina (Roman Catholic Church), Armenia (Armenian Apostolic Church), Costa Rica (Roman Catholic Church), Denmark (Church of Denmark), El Salvador (Roman Catholic Church), England (Church of England), Georgia (Georgian Orthodox Church), Greece (Church of Greece), Iceland (Church of Iceland), Liechtenstein (Roman Catholic Church), Malta (Roman Catholic Church), Monaco (Roman Catholic Church), Romania (Romanian Orthodox Church), Norway (Church of Norway), Vatican City (Roman Catholic Church), and Switzerland (Roman Catholic Church, Swiss Reformed Church and Christian Catholic Church of Switzerland).
The estimated number of Christians in the world ranges from 2.2 billion to 2.4 billion people. The faith represents approximately one-third of the world's population and is the largest religion in the world, with the three largest groups of Christians being the Catholic Church, Protestantism, and the Eastern Orthodox Church. The largest Christian denomination is the Catholic Church, with an estimated 1.2 billion adherents.
A religious order is a lineage of communities and organizations of people who live in some way set apart from society in accordance with their specific religious devotion, usually characterized by the principles of its founder's religious practice. In contrast, the term Holy Orders is used by many Christian churches to refer to ordination or to a group of individuals who are set apart for a special role or ministry. Historically, the word "order" designated an established civil body or corporation with a hierarchy, and ordination meant legal incorporation into an ordo. The word "holy" refers to the Church. In context, therefore, a holy order is set apart for ministry in the Church. Religious orders are composed of initiates (laity) and, in some traditions, ordained clergy.
Within the framework of Christianity, there are at least three possible definitions for Church law. One is the Torah/Mosaic Law (from what Christians consider to be the Old Testament), also called Divine Law or Biblical law. Another is the instructions of Jesus of Nazareth in the Gospel (sometimes referred to as the Law of Christ, the New Commandment or the New Covenant). A third is canon law, the internal ecclesiastical law governing the Roman Catholic Church, the Eastern Orthodox churches, and the Anglican Communion of churches. The way that such church law is legislated, interpreted and at times adjudicated varies widely among these three bodies of churches. In all three traditions, a canon was initially a rule adopted by a council (from Greek kanon / κανών, Hebrew kaneh / קנה, meaning rule, standard, or measure); these canons formed the foundation of canon law.
Christian ethics in general has tended to stress the need for grace, mercy, and forgiveness because of human weakness; it developed while early Christians were subjects of the Roman Empire. From the time Nero blamed Christians for setting Rome ablaze (64 AD) until Galerius (311 AD), persecutions against Christians erupted periodically. Consequently, early Christian ethics included discussions of how believers should relate to Roman authority and to the empire.
Under the Emperor Constantine I (312–337), Christianity became a legal religion. While some scholars debate whether Constantine's conversion to Christianity was authentic or simply a matter of political expediency, Constantine's decree made the empire safe for Christian practice and belief. Consequently, issues of Christian doctrine, ethics and church practice were debated openly, as seen, for example, at the First Council of Nicaea and the first seven ecumenical councils. By the time of Theodosius I (379–395), Christianity had become the state religion of the empire. With Christianity in power, ethical concerns broadened and included discussions of the proper role of the state.
Render unto Caesar... is the beginning of a phrase attributed to Jesus in the synoptic gospels, which reads in full, "Render unto Caesar the things which are Caesar's, and unto God the things that are God's". This phrase has become a widely quoted summary of the relationship between Christianity and secular authority. The gospels say that when Jesus gave his response, his interrogators "marvelled, and left him, and went their way." Time has not resolved the ambiguity in this phrase, and people continue to interpret the passage to support positions that are poles apart. The traditional division in Christian thought, carefully worked out, is that state and church have separate spheres of influence.
Thomas Aquinas discussed at length the view that human law is positive law: natural law applied by governments to societies. All human laws were to be judged by their conformity to the natural law; an unjust law was, in a sense, no law at all. The natural law was thus not only used to pass judgment on the moral worth of various laws, but also to determine what the law said in the first place, which could result in some tension. Later ecclesiastical writers followed in his footsteps.
Christian democracy is a political ideology that seeks to apply Christian principles to public policy. It emerged in 19th-century Europe, largely under the influence of Catholic social teaching. In a number of countries its Christian ethos has been diluted by secularisation. In practice, Christian democracy is often considered conservative on cultural, social and moral issues and progressive on fiscal and economic issues. In places where their opponents have traditionally been secularist socialists and social democrats, Christian democratic parties are moderately conservative, whereas in other cultural and political environments they can lean to the left.
Attitudes and beliefs about the roles and responsibilities of women in Christianity vary considerably today as they have throughout the last two millennia—evolving along with or counter to the societies in which Christians have lived. The Bible and Christianity historically have been interpreted as excluding women from church leadership and placing them in submissive roles in marriage. Male leadership has been assumed in the church and within marriage, society and government.
Some contemporary writers describe the role of women in the life of the church as having been downplayed, overlooked, or denied throughout much of Christian history. Paradigm shifts in gender roles, both in society and in many churches, have inspired a reevaluation by many Christians of long-held attitudes to the contrary. Christian egalitarians have increasingly argued for equal roles for men and women in marriage, as well as for the ordination of women to the clergy. Contemporary conservatives, meanwhile, have reasserted what has been termed a "complementarian" position, promoting the traditional belief that the Bible ordains different roles and responsibilities for women and men in the Church and family.
},
{
"paragraph_id": 47,
"text": "The Byzantine Empire, which was the most sophisticated culture during antiquity, suffered under Muslim conquests limiting its scientific prowess during the Medieval period. Christian Western Europe had suffered a catastrophic loss of knowledge following the fall of the Western Roman Empire. But thanks to the Church scholars such as Aquinas and Buridan, the West carried on at least the spirit of scientific inquiry which would later lead to Europe's taking the lead in science during the Scientific Revolution using translations of medieval works.",
"title": "Christian civilization"
},
{
"paragraph_id": 48,
"text": "Medieval technology refers to the technology used in medieval Europe under Christian rule. After the Renaissance of the 12th century, medieval Europe saw a radical change in the rate of new inventions, innovations in the ways of managing traditional means of production, and economic growth. The period saw major technological advances, including the adoption of gunpowder and the astrolabe, the invention of spectacles, and greatly improved water mills, building techniques, agriculture in general, clocks, and ships. The latter advances made possible the dawn of the Age of Exploration. The development of water mills was impressive, and extended from agriculture to sawmills both for timber and stone, probably derived from Roman technology. By the time of the Domesday Book, most large villages in Britain had mills. They also were widely used in mining, as described by Georg Agricola in De Re Metallica for raising ore from shafts, crushing ore, and even powering bellows.",
"title": "Christian civilization"
},
{
"paragraph_id": 49,
"text": "Significant in this respect were advances within the fields of navigation. The compass and astrolabe along with advances in shipbuilding, enabled the navigation of the World Oceans and thus domination of the worlds economic trade. Gutenberg's printing press made possible a dissemination of knowledge to a wider population, that would not only lead to a gradually more egalitarian society, but one more able to dominate other cultures, drawing from a vast reserve of knowledge and experience.",
"title": "Christian civilization"
},
{
"paragraph_id": 50,
"text": "During the Renaissance, great advances occurred in geography, astronomy, chemistry, physics, math, manufacturing, and engineering. The rediscovery of ancient scientific texts was accelerated after the Fall of Constantinople, and the invention of printing which would democratize learning and allow a faster propagation of new ideas. Renaissance technology is the set of artifacts and customs, spanning roughly the 14th through the 16th century. The era is marked by such profound technical advancements like the printing press, linear perspectivity, patent law, double shell domes or Bastion fortresses. Draw-books of the Renaissance artist-engineers such as Taccola and Leonardo da Vinci give a deep insight into the mechanical technology then known and applied.",
"title": "Christian civilization"
},
{
"paragraph_id": 51,
"text": "Renaissance science spawned the Scientific Revolution; science and technology began a cycle of mutual advancement. The Scientific Renaissance was the early phase of the Scientific Revolution. In the two-phase model of early modern science: a Scientific Renaissance of the 15th and 16th centuries, focused on the restoration of the natural knowledge of the ancients; and a Scientific Revolution of the 17th century, when scientists shifted from recovery to innovation. Some scholars and historians attributes Christianity to having contributed to the rise of the Scientific Revolution.",
"title": "Christian civilization"
},
{
"paragraph_id": 52,
"text": "Professor Noah J Efron says that \"Generations of historians and sociologists have discovered many ways in which Christians, Christian beliefs, and Christian institutions played crucial roles in fashioning the tenets, methods, and institutions of what in time became modern science. They found that some forms of Christianity provided the motivation to study nature systematically...\" Virtually all modern scholars and historians agree that Christianity moved many early-modern intellectuals to study nature systematically.",
"title": "Christian civilization"
},
{
"paragraph_id": 53,
"text": "In 2009, according to the Encyclopædia Britannica, Christianity was the majority religion in Europe (including Russia) with 80%, Latin America with 92%, North America with 81%, and Oceania with 79%. There are also large Christian communities in other parts of the world, such as China, India and Central Asia, where Christianity is the second-largest religion after Islam. The United States is home to the world's largest Christian population, followed by Brazil and Mexico.",
"title": "Demographics"
},
{
"paragraph_id": 54,
"text": "Many Christians not only live under, but also have an official status in, a state religion of the following nations: Argentina (Roman Catholic Church), Armenia (Armenian Apostolic Church), Costa Rica (Roman Catholic Church), Denmark (Church of Denmark), El Salvador (Roman Catholic Church), England (Church of England), Georgia (Georgian Orthodox Church), Greece (Church of Greece), Iceland (Church of Iceland), Liechtenstein (Roman Catholic Church), Malta (Roman Catholic Church), Monaco (Roman Catholic Church), Romania (Romanian Orthodox Church), Norway (Church of Norway), Vatican City (Roman Catholic Church), Switzerland (Roman Catholic Church, Swiss Reformed Church and Christian Catholic Church of Switzerland).",
"title": "Demographics"
},
{
"paragraph_id": 55,
"text": "The estimated number of Christians in the world ranges from 2.2 billion to 2.4 billion people. The faith represents approximately one-third of the world's population and is the largest religion in the world, with the three largest groups of Christians being the Catholic Church, Protestantism, and the Eastern Orthodox Church. The largest Christian denomination is the Catholic Church, with an estimated 1.2 billion adherents.",
"title": "Demographics"
},
{
"paragraph_id": 56,
"text": "A religious order is a lineage of communities and organizations of people who live in some way set apart from society in accordance with their specific religious devotion, usually characterized by the principles of its founder's religious practice. In contrast, the term Holy Orders is used by many Christian churches to refer to ordination or to a group of individuals who are set apart for a special role or ministry. Historically, the word \"order\" designated an established civil body or corporation with a hierarchy, and ordination meant legal incorporation into an ordo. The word \"holy\" refers to the Church. In context, therefore, a holy order is set apart for ministry in the Church. Religious orders are composed of initiates (laity) and, in some traditions, ordained clergies.",
"title": "Demographics"
},
{
"paragraph_id": 57,
"text": "Various organizations include:",
"title": "Demographics"
},
{
"paragraph_id": 58,
"text": "Within the framework of Christianity, there are at least three possible definitions for Church law. One is the Torah/Mosaic Law (from what Christians consider to be the Old Testament) also called Divine Law or Biblical law. Another is the instructions of Jesus of Nazareth in the Gospel (sometimes referred to as the Law of Christ or the New Commandment or the New Covenant). A third is canon law which is the internal ecclesiastical law governing the Roman Catholic Church, the Eastern Orthodox churches, and the Anglican Communion of churches. The way that such church law is legislated, interpreted and at times adjudicated varies widely among these three bodies of churches. In all three traditions, a canon was initially a rule adopted by a council (From Greek kanon / κανών, Hebrew kaneh / קנה, for rule, standard, or measure); these canons formed the foundation of canon law.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 59,
"text": "Christian ethics in general has tended to stress the need for grace, mercy, and forgiveness because of human weakness and developed while Early Christians were subjects of the Roman Empire. From the time Nero blamed Christians for setting Rome ablaze (64 AD) until Galerius (311 AD), persecutions against Christians erupted periodically. Consequently, Early Christian ethics included discussions of how believers should relate to Roman authority and to the empire.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 60,
"text": "Under the Emperor Constantine I (312–337), Christianity became a legal religion. While some scholars debate whether Constantine's conversion to Christianity was authentic or simply matter of political expediency, Constantine's decree made the empire safe for Christian practice and belief. Consequently, issues of Christian doctrine, ethics and church practice were debated openly, see for example the First Council of Nicaea and the First seven Ecumenical Councils. By the time of Theodosius I (379–395), Christianity had become the state religion of the empire. With Christianity in power, ethical concerns broaden and included discussions of the proper role of the state.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 61,
"text": "Render unto Caesar... is the beginning of a phrase attributed to Jesus in the synoptic gospels which reads in full, \"Render unto Caesar the things which are Caesar's, and unto God the things that are God's\". This phrase has become a widely quoted summary of the relationship between Christianity and secular authority. The gospels say that when Jesus gave his response, his interrogators \"marvelled, and left him, and went their way.\" Time has not resolved an ambiguity in this phrase, and people continue to interpret this passage to support various positions that are poles apart. The traditional division, carefully determined, in Christian thought is the state and church have separate spheres of influence.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 62,
"text": "Thomas Aquinas thoroughly discussed that human law is positive law which means that it is natural law applied by governments to societies. All human laws were to be judged by their conformity to the natural law. An unjust law was in a sense no law at all. At this point, the natural law was not only used to pass judgment on the moral worth of various laws, but also to determine what the law said in the first place. This could result in some tension. Late ecclesiastical writers followed in his footsteps.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 63,
"text": "Christian democracy is a political ideology that seeks to apply Christian principles to public policy. It emerged in 19th-century Europe, largely under the influence of Catholic social teaching. In a number of countries, the democracy's Christian ethos has been diluted by secularisation. In practice, Christian democracy is often considered conservative on cultural, social and moral issues and progressive on fiscal and economic issues. In places, where their opponents have traditionally been secularist socialists and social democrats, Christian democratic parties are moderately conservative, whereas in other cultural and political environments they can lean to the left.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 64,
"text": "Attitudes and beliefs about the roles and responsibilities of women in Christianity vary considerably today as they have throughout the last two millennia—evolving along with or counter to the societies in which Christians have lived. The Bible and Christianity historically have been interpreted as excluding women from church leadership and placing them in submissive roles in marriage. Male leadership has been assumed in the church and within marriage, society and government.",
"title": "Christianity law and ethics"
},
{
"paragraph_id": 65,
"text": "Some contemporary writers describe the role of women in the life of the church as having been downplayed, overlooked, or denied throughout much of Christian history. Paradigm shifts in gender roles in society and also many churches has inspired reevaluation by many Christians of some long-held attitudes to the contrary. Christian egalitarians have increasingly argued for equal roles for men and women in marriage, as well as for the ordination of women to the clergy. Contemporary conservatives meanwhile have reasserted what has been termed a \"complementarian\" position, promoting the traditional belief that the Bible ordains different roles and responsibilities for women and men in the Church and family.",
"title": "Christianity law and ethics"
}
] | Christendom historically refers to the Christian states, Christian empires, and Christian-majority countries, and to countries in which Christianity dominates or prevails, or with which it is culturally or historically intertwined. Following the spread of Christianity from the Levant to Europe and North Africa during the early Roman Empire, Christendom was divided between the pre-existing Greek East and Latin West. Consequently, internal sects within the Christian religion arose, with their own beliefs and practices, centred around the cities of Rome and Constantinople. From the 11th to the 13th centuries, Latin Christendom rose to the central role of the Western world. The history of the Christian world spans about 2,000 years and includes a variety of socio-political developments, as well as advancements in the arts, architecture, literature, science, philosophy, and technology. The term usually refers to the Middle Ages and the Early Modern period, during which the Christian world represented a geopolitical power juxtaposed with both the pagan and especially the Muslim world. | 2001-10-05T08:47:38Z | 2023-12-30T21:07:16Z | [
"Template:Dead link",
"Template:Aut",
"Template:Cite journal",
"Template:Christianity",
"Template:Christian culture",
"Template:Efn",
"Template:Reflist",
"Template:Failed verification",
"Template:Blockquote",
"Template:Lang",
"Template:Harvp",
"Template:Short description",
"Template:Increase",
"Template:Harnvb",
"Template:Citation",
"Template:Div col",
"Template:Western culture",
"Template:Authority control",
"Template:Further",
"Template:Unreliable source?",
"Template:Off topic",
"Template:Cite web",
"Template:Sister project links",
"Template:Annotated link",
"Template:CathEncy",
"Template:Div col end",
"Template:Clarify",
"Template:Main",
"Template:Decrease",
"Template:Harvnb",
"Template:ISBN",
"Template:Bibleref",
"Template:Cite news",
"Template:Cite encyclopedia",
"Template:Citation needed",
"Template:See also",
"Template:Sfn",
"Template:Nochange",
"Template:Cite book",
"Template:Christianity footer",
"Template:Circa",
"Template:Notelist",
"Template:Cite EB1911",
"Template:Cite CIA World Factbook",
"Template:Wiktionary"
] | https://en.wikipedia.org/wiki/Christendom |
6,710 | Coyote | The coyote (Canis latrans) is a species of canine native to North America. It is smaller than its close relative, the wolf, and slightly smaller than the closely related eastern wolf and red wolf. It fills much of the same ecological niche as the golden jackal does in Eurasia. The coyote is larger and more predatory and was once referred to as the American jackal by a behavioral ecologist. Other historical names for the species include the prairie wolf and the brush wolf.
The coyote is listed as least concern by the International Union for Conservation of Nature, due to its wide distribution and abundance throughout North America. The species is versatile, able to adapt to and expand into environments modified by humans; urban coyotes are common in many cities. The coyote was sighted in eastern Panama (across the Panama Canal from their home range) for the first time in 2013.
The coyote has 19 recognized subspecies. The average male weighs 8 to 20 kg (18 to 44 lb) and the average female 7 to 18 kg (15 to 40 lb). Their fur color is predominantly light gray and red or fulvous interspersed with black and white, though it varies somewhat with geography. It is highly flexible in social organization, living either in a family unit or in loosely knit packs of unrelated individuals. Primarily carnivorous, its diet consists mainly of deer, rabbits, hares, rodents, birds, reptiles, amphibians, fish, and invertebrates, though it may also eat fruits and vegetables on occasion. Its characteristic vocalization is a howl made by solitary individuals. Humans are the coyote's greatest threat, followed by cougars and gray wolves. Despite predation by gray wolves, coyotes sometimes mate with them, and with eastern, or red wolves, producing "coywolf" hybrids. In the northeastern regions of North America, the eastern coyote (a larger subspecies, though still smaller than wolves) is the result of various historical and recent matings with various types of wolves. Genetic studies show that most North American wolves contain some level of coyote DNA.
The coyote is a prominent character in Native American folklore, mainly in Aridoamerica, usually depicted as a trickster that alternately assumes the form of an actual coyote or a man. As with other trickster figures, the coyote uses deception and humor to rebel against social conventions. The animal was especially respected in Mesoamerican cosmology as a symbol of military might. After the European colonization of the Americas, it was seen in Anglo-American culture as a cowardly and untrustworthy animal. Unlike wolves, which have seen their public image improve, attitudes towards the coyote remain largely negative.
Coyote males average 8 to 20 kg (18 to 44 lb) in weight, while females average 7 to 18 kg (15 to 40 lb), though size varies geographically. Northern subspecies, which average 18 kg (40 lb), tend to grow larger than the southern subspecies of Mexico, which average 11.5 kg (25 lb). Total length ranges on average from 1.0 to 1.35 m (3 ft 3 in to 4 ft 5 in), including a tail of about 40 cm (16 in), with females being shorter in both body length and height. The largest coyote on record was a male killed near Afton, Wyoming, on November 19, 1937, which measured 1.5 m (4 ft 11 in) from nose to tail, and weighed 34 kg (75 lb). Scent glands are located at the upper side of the base of the tail and are a bluish-black color.
The color and texture of the coyote's fur vary somewhat geographically. The hair's predominant color is light gray and red or fulvous, interspersed around the body with black and white. Coyotes living at high elevations tend to have more black and gray shades than their desert-dwelling counterparts, which are more fulvous or whitish-gray. The coyote's fur consists of short, soft underfur and long, coarse guard hairs. The fur of northern subspecies is longer and denser than in southern forms, with the fur of some Mexican and Central American forms being almost hispid (bristly). Generally, adult coyotes (including coywolf hybrids) have a sable coat color, dark neonatal coat color, bushy tail with an active supracaudal gland, and a white facial mask. Albinism is extremely rare in coyotes. Out of a total of 750,000 coyotes killed by federal and cooperative hunters between March 1938 and June 1945, only two were albinos.
The coyote is typically smaller than the gray wolf, but has longer ears and a relatively larger braincase, as well as a thinner frame, face, and muzzle. The scent glands are smaller than the gray wolf's, but are the same color. Its fur color is much less varied than the wolf's. The coyote also carries its tail downwards when running or walking, rather than horizontally as the wolf does.
Coyote tracks can be distinguished from those of dogs by their more elongated, less rounded shape. Unlike dogs, the upper canines of coyotes extend past the mental foramina.
At the time of the European colonization of the Americas, coyotes were largely confined to open plains and arid regions of the western half of the continent. In early post-Columbian historical records, determining whether the writer is describing coyotes or wolves is often difficult. One record from 1750 in Kaskaskia, Illinois, written by a local priest, noted that the "wolves" encountered there were smaller and less daring than European wolves. Another account from the early 1800s in Edwards County mentioned wolves howling at night, though these were likely coyotes. This species was encountered several times during the Lewis and Clark Expedition (1804–1806), though it was already well known to European traders on the upper Missouri. Meriwether Lewis, writing on 5 May 1805, in northeastern Montana, described the coyote in these terms:
The small wolf or burrowing dog of the prairies are the inhabitants almost invariably of the open plains; they usually associate in bands of ten or twelve sometimes more and burrow near some pass or place much frequented by game; not being able alone to take deer or goat they are rarely ever found alone but hunt in bands; they frequently watch and seize their prey near their burrows; in these burrows, they raise their young and to them they also resort when pursued; when a person approaches them they frequently bark, their note being precisely that of the small dog. They are of an intermediate size between that of the fox and dog, very active fleet and delicately formed; the ears large erect and pointed the head long and pointed more like that of the fox; tale long ... the hair and fur also resembles the fox, tho' is much coarser and inferior. They are of a pale reddish-brown colour. The eye of a deep sea green colour small and piercing. Their [claws] are rather longer than those of the ordinary wolf or that common to the Atlantic states, none of which are to be found in this quarter, nor I believe above the river Plat.
The coyote was first scientifically described by naturalist Thomas Say in September 1819, on the site of Lewis and Clark's Council Bluffs, 24 km (15 mi) up the Missouri River from the mouth of the Platte during a government-sponsored expedition with Major Stephen Long. He had the first edition of the Lewis and Clark journals in hand, which contained Biddle's edited version of Lewis's observations dated 5 May 1805. His account was published in 1823. Say was the first person to document the difference between a "prairie wolf" (coyote) and, on the next page of his journal, a wolf which he named Canis nubilus (Great Plains wolf). Say described the coyote as:
Canis latrans. Cinereous or gray, varied with black above, and dull fulvous, or cinnamon; hair at base dusky plumbeous, in the middle of its length dull cinnamon, and at tip gray or black, longer on the vertebral line; ears erect, rounded at tip, cinnamon behind, the hair dark plumbeous at base, inside lined with gray hair; eyelids edged with black, superior eyelashes black beneath, and at tip above; supplemental lid margined with black-brown before, and edged with black brown behind; iris yellow; pupil black-blue; spot upon the lachrymal sac black-brown; rostrum cinnamon, tinctured with grayish on the nose; lips white, edged with black, three series of black seta; head between the ears intermixed with gray, and dull cinnamon, hairs dusky plumbeous at base; sides paler than the back, obsoletely fasciate with black above the legs; legs cinnamon on the outer side, more distinct on the posterior hair: a dilated black abbreviated line on the anterior ones near the wrist; tail bushy, fusiform, straight, varied with gray and cinnamon, a spot near the base above, and tip black; the tip of the trunk of the tail, attains the tip of the os calcis, when the leg is extended; beneath white, immaculate, tail cinnamon towards the tip, tip black; posterior feet four toed, anterior five toed.
The first published usage of the word "coyote" (a Spanish borrowing of its Nahuatl name coyōtl) comes from the historian Francisco Javier Clavijero's Historia de México in 1780. Its first use in English occurred in William Bullock's Six months' residence and travels in Mexico (1824), where it is variously transcribed as cayjotte and cocyotie. The word's spelling was standardized as "coyote" by the 1880s.
Alternative English names for the coyote include "prairie wolf", "brush wolf", "cased wolf", "little wolf" and "American jackal". Its binomial name Canis latrans translates to "barking dog", a reference to the many vocalizations they produce.
Xiaoming Wang and Richard H. Tedford, two of the foremost authorities on carnivore evolution, proposed that the genus Canis was the descendant of the coyote-like Eucyon davisi, and that remains of Canis first appeared in the Miocene, 6 million years ago (Mya), in the southwestern US and Mexico. By the Pliocene (5 Mya), the larger Canis lepophagus appeared in the same region, and by the early Pleistocene (1 Mya) C. latrans (the coyote) was in existence. They proposed that the progression from Eucyon davisi to C. lepophagus to the coyote was one of linear evolution.
C. latrans and C. aureus are closely related to C. edwardii, a species that appeared earliest, spanning the mid-Blancan (late Pliocene) to the close of the Irvingtonian (late Pleistocene), and coyote remains indistinguishable from C. latrans were contemporaneous with C. edwardii in North America. Johnston describes C. lepophagus as having a more slender skull and skeleton than the modern coyote. Ronald Nowak found that the early populations had small, delicate, narrowly proportioned skulls that resemble small coyotes and appear to be ancestral to C. latrans.
C. lepophagus was similar in weight to modern coyotes, but had shorter limb bones that indicate a less cursorial lifestyle. The coyote represents a more primitive form of Canis than the gray wolf, as shown by its relatively small size and its comparatively narrow skull and jaws, which lack the grasping power necessary to hold the large prey in which wolves specialize. This is further corroborated by the coyote's sagittal crest, which is low or totally flattened, thus indicating a weaker bite than that of wolves. The coyote is not a specialized carnivore as the wolf is, as shown by the larger chewing surfaces on the molars, reflecting the species' relative dependence on vegetable matter. In these respects, the coyote resembles the fox-like progenitors of the genus more than the wolf does.
The oldest fossils that fall within the range of the modern coyote date to 0.74–0.85 Ma (million years) in Hamilton Cave, West Virginia; 0.73 Ma in Irvington, California; 0.35–0.48 Ma in Porcupine Cave, Colorado, and in Cumberland Cave, Pennsylvania. Modern coyotes arose 1,000 years after the Quaternary extinction event. Compared to their modern Holocene counterparts, Pleistocene coyotes (C. l. orcutti) were larger and more robust, likely in response to larger competitors and prey. Pleistocene coyotes were likely more specialized carnivores than their descendants, as their teeth were more adapted to shearing meat, showing fewer grinding surfaces suited for processing vegetation. Their reduction in size occurred within 1,000 years of the Quaternary extinction event, when their large prey died out. Furthermore, Pleistocene coyotes were unable to exploit the big-game hunting niche left vacant after the extinction of the dire wolf (Aenocyon dirus), as it was rapidly filled by gray wolves, which likely actively killed off the large coyotes, with natural selection favoring the modern gracile morph.
In 1993, a study proposed that the wolves of North America display skull traits more similar to the coyote than wolves from Eurasia. In 2010, a study found that the coyote was a basal member of the clade that included the Tibetan wolf, the domestic dog, the Mongolian wolf and the Eurasian wolf, with the Tibetan wolf diverging early from wolves and domestic dogs.
In 2016, a whole-genome DNA study proposed, based on the assumptions made, that all of the North American wolves and coyotes diverged from a common ancestor about 51,000 years ago. However, the proposed timing of the wolf / coyote divergence conflicts with the discovery of a coyote-like specimen in strata dated to 1 Mya. The study also indicated that all North American wolves have a significant amount of coyote ancestry and all coyotes some degree of wolf ancestry, and that the red wolf and eastern wolf are highly admixed with different proportions of gray wolf and coyote ancestry.
Genetic studies relating to wolves or dogs have inferred phylogenetic relationships based on the only reference genome available, that of the Boxer dog. In 2017, the first reference genome of the wolf Canis lupus lupus was mapped to aid future research. In 2018, a study looked at the genomic structure and admixture of North American wolves, wolf-like canids, and coyotes using specimens from across their entire range that mapped the largest dataset of nuclear genome sequences against the wolf reference genome.
The study supports the findings of previous studies that North American gray wolves and wolf-like canids were the result of complex gray wolf and coyote mixing. A polar wolf from Greenland and a coyote from Mexico represented the purest specimens. The coyotes from Alaska, California, Alabama, and Quebec show almost no wolf ancestry. Coyotes from Missouri, Illinois, and Florida exhibit 5–10% wolf ancestry. There was 40% wolf to 60% coyote ancestry in red wolves, 60% wolf to 40% coyote in Eastern timber wolves, and 75% wolf to 25% coyote in the Great Lakes wolves. There was 10% coyote ancestry in Mexican wolves and the Atlantic Coast wolves, 5% in Pacific Coast and Yellowstone wolves, and less than 3% in Canadian archipelago wolves. If a third canid had been involved in the admixture of the North American wolf-like canids, then its genetic signature would have been found in coyotes and wolves, which it has not.
In 2018, whole genome sequencing was used to compare members of the genus Canis. The study indicates that the common ancestor of the coyote and gray wolf has genetically admixed with a ghost population of an extinct, unidentified canid. The "ghost" canid was genetically close to the dhole, and had evolved after the divergence of the African wild dog from the other canid species. The basal position of the coyote compared to the wolf is proposed to be due to the coyote retaining more of the mitochondrial genome from the unknown extinct canid.
As of 2005, 19 subspecies are recognized. Geographic variation in coyotes is not great, though taken as a whole, the eastern subspecies (C. l. thamnos and C. l. frustor) are large, dark-colored animals, with a gradual paling in color and reduction in size westward and northward (C. l. texensis, C. l. latrans, C. l. lestes, and C. l. incolatus), a brightening of 'ochraceous' tones – deep orange or brown – towards the Pacific coast (C. l. ochropus, C. l. umpquensis), a reduction in size in Aridoamerica (C. l. microdon, C. l. mearnsi) and a general trend towards dark reddish colors and short muzzles in Mexican and Central American populations.
Coyotes occasionally mate with domestic dogs, sometimes producing crosses colloquially known as "coydogs". Such matings are rare in the wild, as the mating cycles of dogs and coyotes do not coincide, and coyotes are usually antagonistic towards dogs. Hybridization usually only occurs when coyotes are expanding into areas where conspecifics are few, and dogs are the only alternatives. Even then, pup survival rates are lower than normal, as dogs do not form pair bonds with coyotes, thus making the rearing of pups more difficult. In captivity, F1 hybrids (first generation) tend to be more mischievous and less manageable as pups than dogs, and are less trustworthy on maturity than wolf-dog hybrids.
Hybrids vary in appearance, but generally retain the coyote's usual characteristics. F1 hybrids tend to be intermediate in form between dogs and coyotes, while F2 hybrids (second generation) are more varied. Both F1 and F2 hybrids resemble their coyote parents in terms of shyness and intrasexual aggression. Hybrids are fertile and can be successfully bred through four generations. Melanistic coyotes owe their black pelts to a mutation that first arose in domestic dogs. A population of non-albino white coyotes in Newfoundland owe their coloration to a melanocortin 1 receptor mutation inherited from Golden Retrievers.
Coyotes have hybridized with wolves to varying degrees, particularly in eastern North America. The so-called "eastern coyote" of northeastern North America probably originated in the aftermath of the extermination of gray and eastern wolves in the northeast, thus allowing coyotes to colonize former wolf ranges and mix with the remnant wolf populations. This hybrid is smaller than either the gray or eastern wolf, and holds smaller territories, but is in turn larger and holds more extensive home ranges than the typical western coyote. As of 2010, the eastern coyote's genetic makeup is fairly uniform, with minimal influence from eastern wolves or western coyotes.
Adult eastern coyotes are larger than western coyotes, with female eastern coyotes weighing 21% more than male western coyotes. Physical differences become more apparent by the age of 35 days, with eastern coyote pups having longer legs than their western counterparts. Differences in dental development also occur, with tooth eruption being later, and in a different order, in the eastern coyote. Aside from its size, the eastern coyote is physically similar to the western coyote. The four color phases range from dark brown to blond or reddish blond, though the most common phase is gray-brown, with reddish legs, ears, and flanks.
No significant differences exist between eastern and western coyotes in aggression and fighting, though eastern coyotes tend to fight less, and are more playful. Unlike western coyote pups, in which fighting precedes play behavior, fighting among eastern coyote pups occurs after the onset of play. Eastern coyotes tend to reach sexual maturity at two years of age, much later than in western coyotes.
Eastern and red wolves are also products of varying degrees of wolf-coyote hybridization. The eastern wolf probably was a result of a wolf-coyote admixture, combined with extensive backcrossing with parent gray wolf populations. The red wolf may have originated during a time of declining wolf populations in the Southeastern Woodlands, forcing a wolf-coyote hybridization, as well as backcrossing with local parent coyote populations to the extent that about 75–80% of the modern red wolf's genome is of coyote derivation.
Like the Eurasian golden jackal, the coyote is gregarious, but not as dependent on conspecifics as more social canid species like wolves are. This is likely because the coyote is not a specialized hunter of large prey as the latter species is. The basic social unit of a coyote pack is a family containing a reproductive female. However, unrelated coyotes may join forces for companionship, or to bring down prey too large to attack singly. Such "nonfamily" packs are only temporary, and may consist of bachelor males, nonreproductive females and subadult young. Families are formed in midwinter, when females enter estrus. Pair bonding can occur 2–3 months before actual copulation takes place.
The copulatory tie can last 5–45 minutes. A female entering estrus attracts males by scent marking and howling with increasing frequency. A single female in heat can attract up to seven reproductive males, which can follow her for as long as a month. Although some squabbling may occur among the males, once the female has selected a mate and copulates, the rejected males do not intervene, and move on once they detect other estrous females. Unlike the wolf, which has been known to practice both monogamous and bigamous matings, the coyote is strictly monogamous, even in areas with high coyote densities and abundant food.
Females that fail to mate sometimes assist their sisters or mothers in raising their pups, or join their siblings until the next time they can mate. The newly mated pair then establishes a territory and either constructs their own den or cleans out abandoned badger, marmot, or skunk earths. During the pregnancy, the male frequently hunts alone and brings back food for the female. The female may line the den with dried grass or with fur pulled from her belly. The gestation period is 63 days, with an average litter size of six, though the number fluctuates depending on coyote population density and the abundance of food.
Coyote pups are born in dens, hollow trees, or under ledges, and weigh 200 to 500 g (0.44 to 1.10 lb) at birth. They are altricial, and are completely dependent on milk for their first 10 days. The incisors erupt at about 12 days, the canines at 16, and the second premolars at 21. Their eyes open after 10 days, by which point the pups become increasingly more mobile, walking by 20 days, and running at the age of six weeks. The parents begin supplementing the pup's diet with regurgitated solid food after 12–15 days. By the age of four to six weeks, when their milk teeth are fully functional, the pups are given small food items such as mice, rabbits, or pieces of ungulate carcasses, with lactation steadily decreasing after two months.
Unlike wolf pups, coyote pups begin seriously fighting (as opposed to play fighting) prior to engaging in play behavior. A common play behavior includes the coyote "hip-slam". By three weeks of age, coyote pups bite each other with less inhibition than wolf pups. By the age of four to five weeks, pups have established dominance hierarchies, and are by then more likely to play rather than fight. The male plays an active role in feeding, grooming, and guarding the pups, but abandons them if the female goes missing before the pups are completely weaned. The den is abandoned by June to July, and the pups follow their parents in patrolling their territory and hunting. Pups may leave their families in August, though can remain for much longer. The pups attain adult dimensions at eight months and gain adult weight a month later.
Individual feeding territories vary in size from 0.4 to 62 km² (0.15 to 24 sq mi), with the general concentration of coyotes in a given area depending on food abundance, adequate denning sites, and competition with conspecifics and other predators. The coyote generally does not defend its territory outside of the denning season, and is much less aggressive towards intruders than the wolf is, typically chasing and sparring with them, but rarely killing them. Conflicts between coyotes can arise during times of food shortage. Coyotes mark their territories by raised-leg urination and ground-scratching.
Like wolves, coyotes use a den, usually the deserted holes of other species, when gestating and rearing young, though they may occasionally give birth under sagebrushes in the open. Coyote dens can be located in canyons, washouts, coulees, banks, rock bluffs, or level ground. Some dens have been found under abandoned homestead shacks, grain bins, drainage pipes, railroad tracks, hollow logs, thickets, and thistles. The den is continuously dug and cleaned out by the female until the pups are born. Should the den be disturbed or infested with fleas, the pups are moved into another den. A coyote den can have several entrances and passages branching out from the main chamber. A single den can be used year after year.
While the popular consensus is that olfaction is very important for hunting, two studies that experimentally investigated the role of olfactory, auditory, and visual cues found that visual cues are the most important ones for hunting in red foxes and coyotes.
When hunting large prey, the coyote often works in pairs or small groups. Success in killing large ungulates depends on factors such as snow depth and crust density. Younger animals usually avoid participating in such hunts, with the breeding pair typically doing most of the work. The coyote pursues large prey, typically hamstringing the animal and then harassing it until it falls. Like other canids, the coyote caches excess food. Coyotes catch mouse-sized rodents by pouncing, whereas ground squirrels are chased. Although coyotes can live in large groups, small prey is typically caught singly.
Coyotes have been observed to kill porcupines in pairs, using their paws to flip the rodents on their backs, then attacking the soft underbelly. Only old and experienced coyotes can successfully prey on porcupines, with many predation attempts by young coyotes resulting in them being injured by their prey's quills. Coyotes sometimes urinate on their food, possibly to claim ownership over it. Recent evidence demonstrates that at least some coyotes have become more nocturnal in hunting, presumably to avoid humans.
Coyotes may occasionally form mutualistic hunting relationships with American badgers, assisting each other in digging up rodent prey. The relationship between the two species may occasionally border on apparent "friendship", as some coyotes have been observed laying their heads on their badger companions or licking their faces without protest. The amicable interactions between coyotes and badgers were known to pre-Columbian civilizations, as shown on a jar found in Mexico dated to 1250–1300 CE depicting the relationship between the two.
Food scraps, pet food, and animal feces may attract a coyote to a trash can.
As the coyote is both a gregarious and a solitary animal, the variability of its visual and vocal repertoire is intermediate between that of the solitary foxes and the highly social wolf. The aggressive behavior of the coyote bears more similarities to that of foxes than to that of wolves and dogs. An aggressive coyote arches its back and lowers its tail. Unlike dogs, which solicit playful behavior by performing a "play-bow" followed by a "play-leap", play in coyotes consists of a bow, followed by side-to-side head flexions and a series of "spins" and "dives". Although coyotes will sometimes bite their playmates' scruff as dogs do, they typically approach low, and make upward-directed bites.
Pups fight each other regardless of sex, while among adults, aggression is typically reserved for members of the same sex. Combatants approach each other waving their tails and snarling with their jaws open, though fights are typically silent. Males tend to fight in a vertical stance, while females fight on all four paws. Fights among females tend to be more serious than ones among males, as females seize their opponents' forelegs, throat, and shoulders.
The coyote has been described as "the most vocal of all [wild] North American mammals". Its loudness and range of vocalizations were the cause for its binomial name Canis latrans, meaning "barking dog". At least 11 different vocalizations are known in adult coyotes. These sounds are divided into three categories: agonistic and alarm, greeting, and contact. Vocalizations of the first category include woofs, growls, huffs, barks, bark howls, yelps, and high-frequency whines. Woofs are used as low-intensity threats or alarms and are usually heard near den sites, prompting the pups to immediately retreat into their burrows.
Growls are used as threats at short distances but have also been heard among playing pups and copulating males. Huffs are high-intensity threat vocalizations produced by rapid expiration of air. Barks can be classed as both long-distance threat vocalizations and alarm calls. Bark howls may serve similar functions. Yelps are emitted as a sign of submission, while high-frequency whines are produced by dominant animals acknowledging the submission of subordinates. Greeting vocalizations include low-frequency whines, 'wow-oo-wows', and group yip howls. Low-frequency whines are emitted by submissive animals and are usually accompanied by tail wagging and muzzle nibbling.
The sound known as 'wow-oo-wow' has been described as a "greeting song". The group yip howl is emitted when two or more pack members reunite and may be the final act of a complex greeting ceremony. Contact calls include lone howls and group howls, as well as the previously mentioned group yip howls. The lone howl is the most iconic sound of the coyote and may serve the purpose of announcing the presence of a lone individual separated from its pack. Group howls are used as both substitute group yip howls and as responses to either lone howls, group howls, or group yip howls.
Prior to the near extermination of wolves and cougars, the coyote was most numerous in grasslands inhabited by bison, pronghorn, elk, and other deer, doing particularly well in short-grass areas with prairie dogs, though it was just as much at home in semiarid areas with sagebrush and jackrabbits or in deserts inhabited by cactus, kangaroo rats, and rattlesnakes. As long as it was not in direct competition with the wolf, the coyote ranged from the Sonoran Desert to the alpine regions of adjoining mountains or the plains and mountainous areas of Alberta. With the extermination of the wolf, the coyote's range expanded to encompass broken forests from the tropics of Guatemala to the northern slope of Alaska.
Coyotes walk around 5–16 kilometres (3–10 mi) per day, often along trails such as logging roads and paths; they may use iced-over rivers as travel routes in winter. They are often crepuscular, being more active around evening and the beginning of the night than during the day. However, in urban areas coyotes are known to be more nocturnal, likely to avoid encounters with humans. Like many canids, coyotes are competent swimmers, reported to be able to travel at least 0.8 kilometres (0.5 mi) across water.
The coyote is ecologically the North American equivalent of the Eurasian golden jackal. Likewise, the coyote is highly versatile in its choice of food, but is primarily carnivorous, with 90% of its diet consisting of meat. Prey species include bison (largely as carrion), white-tailed deer, mule deer, moose, elk, bighorn sheep, pronghorn, rabbits, hares, rodents, birds (especially galliformes, roadrunners, young water birds and pigeons and doves), amphibians (except toads), lizards, snakes, turtles and tortoises, fish, crustaceans, and insects. Coyotes may be picky over the prey they target, as animals such as shrews, moles, and brown rats do not occur in their diet in proportion to their numbers.
Terrestrial animals and/or burrowing small mammals such as ground squirrels and associated species (marmots, prairie dogs, chipmunks) as well as voles, pocket gophers, kangaroo rats and other ground-favoring rodents may be quite common foods, especially for lone coyotes. Examples of specific, primary mammal prey include eastern cottontail rabbits, thirteen-lined ground squirrels, and white-footed mice. More unusual prey include fishers, young black bear cubs, harp seals and rattlesnakes. Coyotes kill rattlesnakes mostly for food, but also to protect their pups at their dens, by teasing the snakes until they stretch out and then biting their heads and snapping and shaking the snakes. Birds taken by coyotes may range in size from thrashers, larks and sparrows to adult wild turkeys and, rarely, brooding adult swans and pelicans.
If working in packs or pairs, coyotes may have access to larger prey than lone individuals normally take, such as various prey weighing more than 10 kg (22 lb). In some cases, packs of coyotes have dispatched much larger prey such as adult Odocoileus deer, cow elk, pronghorns and wild sheep, although the young fawns, calves and lambs of these animals are considerably more often taken even by packs, as well as domestic sheep and domestic cattle. In some cases, coyotes can bring down prey weighing up to 100 to 200 kg (220 to 440 lb) or more. When it comes to adult ungulates such as wild deer, they often exploit them when vulnerable, such as those that are infirm, stuck in snow or ice, otherwise winter-weakened, or heavily pregnant, whereas less wary domestic ungulates may be more easily exploited.
Although coyotes prefer fresh meat, they will scavenge when the opportunity presents itself. Excluding the insects, fruit, and grass eaten, the coyote requires an estimated 600 g (1.3 lb) of food daily, or 250 kg (550 lb) annually. The coyote readily cannibalizes the carcasses of conspecifics, with coyote fat having been successfully used by coyote hunters as a lure or poisoned bait. The coyote's winter diet consists mainly of large ungulate carcasses, with very little plant matter. Rodent prey increases in importance during the spring, summer, and fall.
The coyote feeds on a variety of different produce, including strawberries, blackberries, blueberries, sarsaparillas, peaches, pears, apples, prickly pears, chapotes, persimmons, peanuts, watermelons, cantaloupes, and carrots. During the winter and early spring, the coyote eats large quantities of grass, such as green wheat blades. It sometimes eats unusual items such as cotton cake, soybean meal, domestic animal droppings, beans, and cultivated grain such as maize, wheat, and sorghum.
In coastal California, coyotes now consume a higher percentage of marine-based food than their ancestors, which is thought to be due to the extirpation of the grizzly bear from this region. In Death Valley, coyotes may consume great quantities of hawkmoth caterpillars or beetles in the spring flowering months.
In areas where the ranges of coyotes and gray wolves overlap, interference competition and predation by wolves has been hypothesized to limit local coyote densities. Coyote ranges expanded during the 19th and 20th centuries following the extirpation of wolves, while coyotes were driven to extinction on Isle Royale after wolves colonized the island in the 1940s. One study conducted in Yellowstone National Park, where both species coexist, concluded that the coyote population in the Lamar River Valley declined by 39% following the reintroduction of wolves in the 1990s, while coyote populations in wolf inhabited areas of the Grand Teton National Park are 33% lower than in areas where they are absent. Wolves have been observed to not tolerate coyotes in their vicinity, though coyotes have been known to trail wolves to feed on their kills.
Coyotes may compete with cougars in some areas. In the eastern Sierra Nevada, coyotes compete with cougars over mule deer. Cougars normally outcompete and dominate coyotes, and may kill them occasionally, thus reducing coyote predation pressure on smaller carnivores such as foxes and bobcats. Coyotes that are killed are sometimes not eaten, perhaps indicating that these comprise competitive interspecies interactions; however, there are multiple confirmed cases of cougars also eating coyotes. In northeastern Mexico, cougar predation on coyotes continues apace, but coyotes were absent from the prey spectrum of sympatric jaguars, apparently due to differing habitat usages.
Other than by gray wolves and cougars, predation on adult coyotes is relatively rare but multiple other predators can be occasional threats. In some cases, adult coyotes have been preyed upon by both American black and grizzly bears, American alligators, large Canada lynx and golden eagles. At kill sites and carrion, coyotes, especially if working alone, tend to be dominated by wolves, cougars, bears, wolverines and, usually but not always, eagles (i.e., bald and golden). When such larger, more powerful and/or more aggressive predators such as these come to a shared feeding site, a coyote may either try to fight, wait until the other predator is done or occasionally share a kill, but if a major danger such as wolves or an adult cougar is present, the coyote will tend to flee.
Coyotes rarely kill healthy adult red foxes, and have been observed to feed or den alongside them, though they often kill foxes caught in traps. Coyotes may kill fox kits, but this is not a major source of mortality. In southern California, coyotes frequently kill gray foxes, and these smaller canids tend to avoid areas with high coyote densities.
In some areas, coyotes share their ranges with bobcats. These two similarly-sized species rarely physically confront one another, though bobcat populations tend to diminish in areas with high coyote densities. However, several studies have demonstrated interference competition between coyotes and bobcats, and in all cases coyotes dominated the interaction. Multiple researchers reported instances of coyotes killing bobcats, whereas bobcats killing coyotes is rarer. Coyotes attack bobcats using a bite-and-shake method similar to what is used on medium-sized prey. Both single coyotes and groups have been known to occasionally kill bobcats; in most cases, the bobcats were relatively small specimens, such as adult females and juveniles.
Attacks by unknown numbers of coyotes on adult male bobcats have occurred. In California, coyote and bobcat populations are not negatively correlated across different habitat types, but predation by coyotes is an important source of mortality in bobcats. Biologist Stanley Paul Young noted that in his entire trapping career, he had never successfully saved a captured bobcat from being killed by coyotes, and wrote of two incidents wherein coyotes chased bobcats up trees. Coyotes have been documented to directly kill Canada lynx on occasion, and compete with them for prey, especially snowshoe hares. In some areas, including central Alberta, lynx are more abundant where coyotes are few; thus, interactions with coyotes appear to influence lynx populations more than the availability of snowshoe hares.
Due to the coyote's wide range and abundance throughout North America, it is listed as Least Concern by the International Union for Conservation of Nature (IUCN). The coyote's pre-Columbian range was limited to the Southwest and Plains regions of North America, and to northern and central Mexico. By the 19th century, the species had expanded north and east, and it expanded further after 1900, coinciding with land conversion and the extirpation of wolves. Its range now encompasses the entire North American continent, including all of the contiguous United States and Mexico, extending southward into Central America and northward into most of Canada and Alaska. This expansion is ongoing, and the species now occupies the majority of areas between 8°N (Panama) and 70°N (northern Alaska).
Although it was once widely believed that coyotes are recent immigrants to southern Mexico and Central America, aided in their expansion by deforestation, Pleistocene and Early Holocene records, as well as records from the pre-Columbian period and early European colonization show that the animal was present in the area long before modern times. Range expansion occurred south of Costa Rica during the late 1970s and northern Panama in the early 1980s, following the expansion of cattle-grazing lands into tropical rain forests.
The coyote is predicted to appear in northern Belize in the near future, as the habitat there is favorable to the species. Concerns have been raised of a possible expansion into South America through the Panamanian Isthmus, should the Darién Gap ever be closed by the Pan-American Highway. This fear was partially confirmed in January 2013, when the species was recorded in eastern Panama's Chepo District, beyond the Panama Canal.
A 2017 genetic study proposes that coyotes were not originally found in the eastern United States. From the 1890s, dense forests were transformed into agricultural land and wolf control was implemented on a large scale, leaving a niche for coyotes to disperse into. There were two major dispersals from two populations of genetically distinct coyotes. The first major dispersal to the northeast came in the early 20th century from those coyotes living in the northern Great Plains. These came to New England via the northern Great Lakes region and southern Canada, and to Pennsylvania via the southern Great Lakes region, meeting in the 1940s in New York and Pennsylvania.
These coyotes have hybridized with the remnant gray wolf and eastern wolf populations, which has added to coyote genetic diversity and may have assisted adaptation to the new niche. The second major dispersal to the southeast came in the mid-20th century from Texas and reached the Carolinas in the 1980s. These coyotes hybridized with remnant red wolf populations before the 1970s, when the red wolf was extirpated in the wild, which has also added to coyote genetic diversity and may have assisted adaptation to this new niche as well. Both major dispersals have experienced rapid population growth and are forecast to meet along the mid-Atlantic coast. The study concludes that for coyotes, long-range dispersal, gene flow from local populations, and rapid population growth may be interrelated.
Among large North American carnivores, the coyote probably carries the largest number of diseases and parasites, likely due to its wide range and varied diet. Viral diseases known to infect coyotes include rabies, canine distemper, infectious canine hepatitis, four strains of equine encephalitis, and oral papillomatosis. By the late 1970s, serious rabies outbreaks in coyotes had not been recorded for over 60 years, though sporadic cases occurred every one to five years. Distemper causes the deaths of many pups in the wild, though some specimens can survive infection. Tularemia, a bacterial disease, infects coyotes through tick bites and through their rodent and lagomorph prey, and can be deadly for pups.
Coyotes can be infected by both demodectic and sarcoptic mange, the latter being the more common. Mite infestations are rare and incidental in coyotes, while tick infestations are more common, with seasonal peaks depending on locality (May–August in the Northwest, March–November in Arkansas). Coyotes are only rarely infested with lice, while fleas infest coyotes from puphood, though they may be more a source of irritation than serious illness. Pulex simulans is the most common flea species to infest coyotes, while Ctenocephalides canis tends to occur only in places where coyotes and dogs (its primary host) inhabit the same area. Although coyotes are rarely hosts to flukes, these parasites can nevertheless have serious effects, particularly Nanophyetus salmincola, which can infect them with salmon poisoning disease, a disease with a 90% mortality rate. The trematode Metorchis conjunctus can also infect coyotes.
Tapeworms have been recorded to infest 60–95% of all coyotes examined. The most common species to infest coyotes are Taenia pisiformis and Taenia crassiceps, which use cottontail rabbits and rodents as intermediate hosts. The largest species known in coyotes is T. hydatigena, which enters coyotes through infected ungulates and can grow to lengths of 80 to 400 cm (31 to 157 in). Although once largely limited to wolves, Echinococcus granulosus has spread to coyotes since the latter began colonizing former wolf ranges.
The most frequent ascaroid roundworm in coyotes is Toxascaris leonina, which dwells in the coyote's small intestine and has no ill effects, except for causing the host to eat more frequently. Hookworms of the genus Ancylostoma infest coyotes throughout their range, being particularly prevalent in humid areas. In areas of high moisture, such as coastal Texas, coyotes can carry up to 250 hookworms each. The blood-drinking A. caninum is particularly dangerous, as it damages the coyote through blood loss and lung congestion. A 10-day-old pup can die from being host to as few as 25 A. caninum worms.
Coyote features as a trickster figure and skin-walker in the folktales of some Native Americans, notably several nations in the Southwestern and Plains regions, where he alternately assumes the form of an actual coyote or that of a man. As with other trickster figures, Coyote acts as a picaresque hero who rebels against social convention through deception and humor. Folklorists such as Harris believe coyotes came to be seen as tricksters due to the animal's intelligence and adaptability. Following the European colonization of the Americas, Anglo-American depictions portrayed Coyote as a cowardly and untrustworthy animal. Unlike the gray wolf, which has undergone a radical improvement of its public image, Anglo-American cultural attitudes towards the coyote remain largely negative.
In the Maidu creation story, Coyote introduces work, suffering, and death to the world. Zuni lore has Coyote bringing winter into the world by stealing light from the kachinas. The Chinook, Maidu, Pawnee, Tohono O'odham, and Ute portray the coyote as the companion of The Creator. A Tohono O'odham flood story has Coyote helping Montezuma survive a global deluge that destroys humanity. After The Creator creates humanity, Coyote and Montezuma teach people how to live. The Crow creation story portrays Old Man Coyote as The Creator. In the Diné creation story, Coyote was present in the First World with First Man and First Woman, though a different version has him being created in the Fourth World. The Navajo Coyote brings death into the world, explaining that without death, too many people would exist and there would be no room to plant corn.
Prior to the Spanish conquest of the Aztec Empire, Coyote played a significant role in Mesoamerican cosmology. The coyote symbolized military might in Classic era Teotihuacan, with warriors dressing up in coyote costumes to call upon its predatory power. The species continued to be linked to Central Mexican warrior cults in the centuries leading up to the post-Classic Aztec rule.
In Aztec mythology, Huehuecóyotl (meaning "old coyote"), the god of dance, music and carnality, is depicted in several codices as a man with a coyote's head. He is sometimes depicted as a womanizer, responsible for bringing war into the world by seducing Xochiquetzal, the goddess of love. Epigrapher David H. Kelley argued that the god Quetzalcoatl owed its origins to pre-Aztec Uto-Aztecan mythological depictions of the coyote, which is portrayed as mankind's "Elder Brother", a creator, seducer, trickster, and culture hero linked to the morning star.
Coyote attacks on humans are uncommon and rarely cause serious injuries, due to the relatively small size of the coyote, but they have become more frequent, especially in California. By the middle of the 19th century, the coyote was already marked as an enemy by humans (Sharp & Hall 1978, pp. 41–54). There have been only two confirmed fatal attacks: one on a three-year-old named Kelly Keen in Glendale, California, and another on a nineteen-year-old named Taylor Mitchell in Nova Scotia, Canada. In the 30 years leading up to March 2006, at least 160 attacks occurred in the United States, mostly in the Los Angeles County area. Data from United States Department of Agriculture (USDA) Wildlife Services, the California Department of Fish and Game, and other sources show that while 41 attacks occurred during the period 1988–1997, 48 attacks were verified from 1998 through 2003. The majority of these incidents occurred in Southern California near the suburban-wildland interface.
Without the harassment practiced by rural people, urban coyotes are losing their fear of humans, a problem worsened by people intentionally or unintentionally feeding coyotes. In such situations, some coyotes have begun to act aggressively toward humans, chasing joggers and bicyclists, confronting people walking their dogs, and stalking small children. Non-rabid coyotes in these areas sometimes target small children, mostly under the age of 10, though some adults have been bitten.
Although media reports of such attacks generally identify the animals in question simply as "coyotes", research into the genetics of the eastern coyote indicates that those involved in attacks in northeast North America, including Pennsylvania, New York, New England, and eastern Canada, may actually have been coywolves, hybrids of Canis latrans and C. lupus, rather than pure coyotes.
As of 2007, coyotes were the most abundant livestock predators in western North America, causing the majority of sheep, goat, and cattle losses. For example, according to the National Agricultural Statistics Service, coyotes were responsible for 60.5% of the 224,000 sheep deaths attributed to predation in 2004. The total number of sheep deaths in 2004 comprised 2.22% of the total sheep and lamb population in the United States, which, according to the USDA's National Agricultural Statistics Service report, totaled 4.66 million sheep and 7.80 million lambs as of July 1, 2005.
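As a quick back-of-the-envelope check of the figures above, the short Python sketch below restates the cited numbers and derives the implied counts. The derived values are illustrative arithmetic only, not additional sourced data.

```python
# Illustrative arithmetic only: the inputs are the USDA/NASS figures cited
# in the text; the derived values are back-of-the-envelope, not sourced data.

predation_deaths_2004 = 224_000   # sheep deaths attributed to predation, 2004
coyote_share = 0.605              # fraction attributed to coyotes

coyote_attributed = predation_deaths_2004 * coyote_share
print(f"Sheep deaths attributed to coyotes: {coyote_attributed:,.0f}")
# -> roughly 135,500 head

# Flock size as of July 1, 2005: 4.66 million sheep plus 7.80 million lambs.
flock = 4_660_000 + 7_800_000
print(f"Predation losses as a share of the flock: "
      f"{predation_deaths_2004 / flock:.2%}")
# -> about 1.80%; this is consistent with the 2.22% figure in the text
# referring to total sheep deaths, a broader category than predation alone.
```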
Because coyote populations are typically many times greater and more widely distributed than those of wolves, coyotes cause more overall predation losses. United States government agents routinely shoot, poison, and trap coyotes, killing about 90,000 each year to protect livestock. An Idaho census taken in 2005 showed that individual coyotes were 5% as likely to attack livestock as individual wolves. In Utah, more than 11,000 coyotes were killed for bounties totaling over $500,000 in the fiscal year ending June 30, 2017.
Livestock guardian dogs are commonly used to aggressively repel predators, and have worked well in both fenced pasture and range operations. A 1986 survey of sheep producers in the USA found that 82% reported that the use of dogs represented an economic asset.
Re-wilding cattle, which involves increasing the animals' natural protective tendencies, is a method for controlling coyotes discussed by Temple Grandin of Colorado State University. This method is gaining popularity among producers who allow their herds to calve on the range and whose cattle graze open pastures throughout the year.
Coyotes typically bite the throat just behind the jaw and below the ear when attacking adult sheep or goats, with death commonly resulting from suffocation. Blood loss is usually a secondary cause of death. Calves and heavily fleeced sheep are killed by attacking the flanks or hindquarters, causing shock and blood loss. When attacking smaller prey, such as young lambs, the kill is made by biting the skull and spinal regions, causing massive tissue and bone damage. Small or young prey may be completely carried off, leaving only blood as evidence of a kill. Coyotes usually leave the hide and most of the skeleton of larger animals relatively intact, unless food is scarce, in which case they may leave only the largest bones. Scattered bits of wool, skin, and other parts are characteristic where coyotes feed extensively on larger carcasses.
Tracks are an important factor in distinguishing coyote from dog predation. Coyote tracks tend to be more oval-shaped and compact than those of domestic dogs; their claw marks are less prominent, and their tracks tend to follow a straight line more closely than those of dogs. With the exception of sighthounds, most dogs of similar weight to coyotes have a slightly shorter stride. Coyote kills can be distinguished from wolf kills by the lesser damage to the underlying tissues. Also, coyote scat tends to be smaller than wolf scat.
Coyotes are often attracted to dog food and animals that are small enough to appear as prey. Items such as garbage, pet food, and sometimes feeding stations for birds and squirrels attract coyotes into backyards. About three to five pets attacked by coyotes are brought into the Animal Urgent Care hospital of South Orange County (California) each week, the majority of which are dogs, since cats typically do not survive the attacks. Scat analysis collected near Claremont, California, revealed that coyotes relied heavily on pets as a food source in winter and spring.
At one location in Southern California, coyotes began relying on a colony of feral cats as a food source. Over time, the coyotes killed most of the cats and then continued to eat the cat food placed daily at the colony site by people who were maintaining the cat colony. Coyotes usually attack smaller-sized dogs, but they have been known to attack even large, powerful breeds such as the Rottweiler in exceptional cases. Dogs larger than coyotes, such as greyhounds, are generally able to drive them off and have been known to kill coyotes. Smaller breeds are more likely to suffer injury or death.
Coyote hunting is one of the most common forms of predator hunting. Few regulations govern the taking of coyotes, which means many different methods can be used to hunt the animal; the most common are trapping, calling, and hound hunting. Since coyotes are colorblind, seeing only in shades of gray and subtle blues, open camouflages and plain patterns can be used. As the average male coyote weighs 8 to 20 kg (18 to 44 lb) and the average female 7 to 18 kg (15 to 40 lb), a universal projectile that can perform across that weight range is the .223 Remington, which expands in the target after entry but before exit, thus delivering the most energy.
Being light and agile animals, coyotes often leave only a faint impression on terrain. The coyote's footprint is oblong, approximately 6.35 cm (2.5 in) long and 5.08 cm (2 in) wide. Both the front and hind paws have four claws, and the center pad is roughly triangular with rounded corners. As in the domestic dog, the front paw is slightly larger than the hind paw; overall, the coyote's paw is most similar to that of the domestic dog.
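As a toy illustration of how the cited track measurements might be applied, the sketch below bundles them into a crude size check. The 20% tolerance and the helper function are hypothetical choices made for this example, not established field-identification criteria.

```python
# Toy heuristic built from the measurements cited above. The 20% tolerance
# and the function itself are illustrative assumptions, not field criteria.

COYOTE_TRACK_LENGTH_CM = 6.35   # ~2.5 in
COYOTE_TRACK_WIDTH_CM = 5.08    # ~2 in
TOLERANCE = 0.20                # assumed +/-20% measurement slack

def roughly_coyote_sized(length_cm: float, width_cm: float) -> bool:
    """Return True if a track's dimensions fall near typical coyote size."""
    def near(value: float, target: float) -> bool:
        return abs(value - target) <= TOLERANCE * target
    # Coyote tracks are oblong: noticeably longer than they are wide.
    return (near(length_cm, COYOTE_TRACK_LENGTH_CM)
            and near(width_cm, COYOTE_TRACK_WIDTH_CM)
            and length_cm > width_cm)

print(roughly_coyote_sized(6.2, 5.0))   # True: close to the cited averages
print(roughly_coyote_sized(8.5, 8.0))   # False: too large and too round
```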
Prior to the mid-19th century, coyote fur was considered worthless. This changed with the diminution of beavers, and by 1860, the hunting of coyotes for their fur became a great source of income (75 cents to $1.50 per skin) for wolfers in the Great Plains. Coyote pelts were of significant economic importance during the early 1950s, ranging in price from $5 to $25 per pelt, depending on locality. The coyote's fur is not durable enough to make rugs, but can be used for coats and jackets, scarves, or muffs. The majority of pelts are used for making trimmings, such as coat collars and sleeves for women's clothing. Coyote fur is sometimes dyed black as imitation silver fox.
Coyotes were occasionally eaten by trappers and mountain men during the western expansion. Coyotes sometimes featured in the feasts of the Plains Indians, and coyote pups were eaten by the indigenous people of San Gabriel, California. The taste of coyote meat has been likened to that of the wolf and is more tender than pork when boiled. Coyote fat, when taken in the fall, has been used on occasion to grease leather or eaten as a spread.
Coyotes were likely semidomesticated by various pre-Columbian cultures. Some 19th-century writers wrote of coyotes being kept in native villages in the Great Plains. The coyote is easily tamed as a pup, but can become destructive as an adult. Both full-blooded and hybrid coyotes can be playful and confiding with their owners, but are suspicious and shy of strangers, though coyotes tractable enough to be used for practical purposes such as retrieving and pointing have been recorded. A tame coyote named "Butch", caught in the summer of 1945, had a short-lived career in cinema, appearing in Smoky (1946) and Ramrod (1947) before being shot while raiding a henhouse.
"title": "Range"
},
{
"paragraph_id": 64,
"text": "Although it was once widely believed that coyotes are recent immigrants to southern Mexico and Central America, aided in their expansion by deforestation, Pleistocene and Early Holocene records, as well as records from the pre-Columbian period and early European colonization show that the animal was present in the area long before modern times. Range expansion occurred south of Costa Rica during the late 1970s and northern Panama in the early 1980s, following the expansion of cattle-grazing lands into tropical rain forests.",
"title": "Range"
},
{
"paragraph_id": 65,
"text": "The coyote is predicted to appear in northern Belize in the near future, as the habitat there is favorable to the species. Concerns have been raised of a possible expansion into South America through the Panamanian Isthmus, should the Darién Gap ever be closed by the Pan-American Highway. This fear was partially confirmed in January 2013, when the species was recorded in eastern Panama's Chepo District, beyond the Panama Canal.",
"title": "Range"
},
{
"paragraph_id": 66,
"text": "A 2017 genetic study proposes that coyotes were originally not found in the area of the eastern United States. From the 1890s, dense forests were transformed into agricultural land and wolf control implemented on a large scale, leaving a niche for coyotes to disperse into. There were two major dispersals from two populations of genetically distinct coyotes. The first major dispersal to the northeast came in the early 20th century from those coyotes living in the northern Great Plains. These came to New England via the northern Great Lakes region and southern Canada, and to Pennsylvania via the southern Great Lakes region, meeting together in the 1940s in New York and Pennsylvania.",
"title": "Range"
},
{
"paragraph_id": 67,
"text": "These coyotes have hybridized with the remnant gray wolf and eastern wolf populations, which has added to coyote genetic diversity and may have assisted adaptation to the new niche. The second major dispersal to the southeast came in the mid-20th century from Texas and reached the Carolinas in the 1980s. These coyotes have hybridized with the remnant red wolf populations before the 1970s when the red wolf was extirpated in the wild, which has also added to coyote genetic diversity and may have assisted adaptation to this new niche as well. Both of these two major coyote dispersals have experienced rapid population growth and are forecast to meet along the mid-Atlantic coast. The study concludes that for coyotes the long range dispersal, gene flow from local populations, and rapid population growth may be inter-related.",
"title": "Range"
},
{
"paragraph_id": 68,
"text": "Among large North American carnivores, the coyote probably carries the largest number of diseases and parasites, likely due to its wide range and varied diet. Viral diseases known to infect coyotes include rabies, canine distemper, infectious canine hepatitis, four strains of equine encephalitis, and oral papillomatosis. By the late 1970s, serious rabies outbreaks in coyotes had ceased to be a problem for over 60 years, though sporadic cases every 1–5 years did occur. Distemper causes the deaths of many pups in the wild, though some specimens can survive infection. Tularemia, a bacterial disease, infects coyotes from tick bites and through their rodent and lagomorph prey, and can be deadly for pups.",
"title": "Diseases and parasites"
},
{
"paragraph_id": 69,
"text": "Coyotes can be infected by both demodectic and sarcoptic mange, the latter being the most common. Mite infestations are rare and incidental in coyotes, while tick infestations are more common, with seasonal peaks depending on locality (May–August in the Northwest, March–November in Arkansas). Coyotes are only rarely infested with lice, while fleas infest coyotes from puphood, though they may be more a source of irritation than serious illness. Pulex simulans is the most common species to infest coyotes, while Ctenocephalides canis tends to occur only in places where coyotes and dogs (its primary host) inhabit the same area. Although coyotes are rarely host to flukes, they can nevertheless have serious effects on coyotes, particularly Nanophyetus salmincola, which can infect them with salmon poisoning disease, a disease with a 90% mortality rate. Trematode Metorchis conjunctus can also infect coyotes.",
"title": "Diseases and parasites"
},
{
"paragraph_id": 70,
"text": "Tapeworms have been recorded to infest 60–95% of all coyotes examined. The most common species to infest coyotes are Taenia pisiformis and Taenia crassiceps, which uses cottontail rabbits and rodents as intermediate hosts. The largest species known in coyotes is T. hydatigena, which enters coyotes through infected ungulates, and can grow to lengths of 80 to 400 cm (31 to 157 in). Although once largely limited to wolves, Echinococcus granulosus has expanded to coyotes since the latter began colonizing former wolf ranges.",
"title": "Diseases and parasites"
},
{
"paragraph_id": 71,
"text": "The most frequent ascaroid roundworm in coyotes is Toxascaris leonina, which dwells in the coyote's small intestine and has no ill effects, except for causing the host to eat more frequently. Hookworms of the genus Ancylostoma infest coyotes throughout their range, being particularly prevalent in humid areas. In areas of high moisture, such as coastal Texas, coyotes can carry up to 250 hookworms each. The blood-drinking A. caninum is particularly dangerous, as it damages the coyote through blood loss and lung congestion. A 10-day-old pup can die from being host to as few as 25 A. caninum worms.",
"title": "Diseases and parasites"
},
{
"paragraph_id": 72,
"text": "Coyote features as a trickster figure and skin-walker in the folktales of some Native Americans, notably several nations in the Southwestern and Plains regions, where he alternately assumes the form of an actual coyote or that of a man. As with other trickster figures, Coyote acts as a picaresque hero who rebels against social convention through deception and humor. Folklorists such as Harris believe coyotes came to be seen as tricksters due to the animal's intelligence and adaptability. After the European colonization of the Americas, Anglo-American depictions of Coyote are of a cowardly and untrustworthy animal. Unlike the gray wolf, which has undergone a radical improvement of its public image, Anglo-American cultural attitudes towards the coyote remain largely negative.",
"title": "Relationships with humans"
},
{
"paragraph_id": 73,
"text": "In the Maidu creation story, Coyote introduces work, suffering, and death to the world. Zuni lore has Coyote bringing winter into the world by stealing light from the kachinas. The Chinook, Maidu, Pawnee, Tohono O'odham, and Ute portray the coyote as the companion of The Creator. A Tohono O'odham flood story has Coyote helping Montezuma survive a global deluge that destroys humanity. After The Creator creates humanity, Coyote and Montezuma teach people how to live. The Crow creation story portrays Old Man Coyote as The Creator. In The Dineh creation story, Coyote was present in the First World with First Man and First Woman, though a different version has it being created in the Fourth World. The Navajo Coyote brings death into the world, explaining that without death, too many people would exist, thus no room to plant corn.",
"title": "Relationships with humans"
},
{
"paragraph_id": 74,
"text": "Prior to the Spanish conquest of the Aztec Empire, Coyote played a significant role in Mesoamerican cosmology. The coyote symbolized military might in Classic era Teotihuacan, with warriors dressing up in coyote costumes to call upon its predatory power. The species continued to be linked to Central Mexican warrior cults in the centuries leading up to the post-Classic Aztec rule.",
"title": "Relationships with humans"
},
{
"paragraph_id": 75,
"text": "In Aztec mythology, Huehuecóyotl (meaning \"old coyote\"), the god of dance, music and carnality, is depicted in several codices as a man with a coyote's head. He is sometimes depicted as a womanizer, responsible for bringing war into the world by seducing Xochiquetzal, the goddess of love. Epigrapher David H. Kelley argued that the god Quetzalcoatl owed its origins to pre-Aztec Uto-Aztecan mythological depictions of the coyote, which is portrayed as mankind's \"Elder Brother\", a creator, seducer, trickster, and culture hero linked to the morning star.",
"title": "Relationships with humans"
},
{
"paragraph_id": 76,
"text": "Coyote attacks on humans are uncommon and rarely cause serious injuries, due to the relatively small size of the coyote, but have been increasingly frequent, especially in California. By the middle of the 19th century, the coyote was already marked as an enemy by humans. (Sharp & Hall, 1978 Pg. 41-54) There have been only two confirmed fatal attacks: one on a three-year-old named Kelly Keen in Glendale, California and another on a nineteen-year-old named Taylor Mitchell in Nova Scotia, Canada. In the 30 years leading up to March 2006, at least 160 attacks occurred in the United States, mostly in the Los Angeles County area. Data from United States Department of Agriculture (USDA) Wildlife Services, the California Department of Fish and Game, and other sources show that while 41 attacks occurred during the period of 1988–1997, 48 attacks were verified from 1998 through 2003. The majority of these incidents occurred in Southern California near the suburban-wildland interface.",
"title": "Relationships with humans"
},
{
"paragraph_id": 77,
"text": "In the absence of the harassment of coyotes practiced by rural people, urban coyotes are losing their fear of humans, which is further worsened by people intentionally or unintentionally feeding coyotes. In such situations, some coyotes have begun to act aggressively toward humans, chasing joggers and bicyclists, confronting people walking their dogs, and stalking small children. Non-rabid coyotes in these areas sometimes target small children, mostly under the age of 10, though some adults have been bitten.",
"title": "Relationships with humans"
},
{
"paragraph_id": 78,
"text": "Although media reports of such attacks generally identify the animals in question as simply \"coyotes\", research into the genetics of the eastern coyote indicates those involved in attacks in northeast North America, including Pennsylvania, New York, New England, and eastern Canada, may have actually been coywolves, hybrids of Canis latrans and C. lupus, not fully coyotes.",
"title": "Relationships with humans"
},
{
"paragraph_id": 79,
"text": "As of 2007, coyotes were the most abundant livestock predators in western North America, causing the majority of sheep, goat, and cattle losses. For example, according to the National Agricultural Statistics Service, coyotes were responsible for 60.5% of the 224,000 sheep deaths attributed to predation in 2004. The total number of sheep deaths in 2004 comprised 2.22% of the total sheep and lamb population in the United States, which, according to the National Agricultural Statistics Service USDA report, totaled 4.66 million and 7.80 million heads respectively as of July 1, 2005.",
"title": "Relationships with humans"
},
{
"paragraph_id": 80,
"text": "Because coyote populations are typically many times greater and more widely distributed than those of wolves, coyotes cause more overall predation losses. United States government agents routinely shoot, poison, trap, and kill about 90,000 coyotes each year to protect livestock. An Idaho census taken in 2005 showed that individual coyotes were 5% as likely to attack livestock as individual wolves. In Utah, more than 11,000 coyotes were killed for bounties totaling over $500,000 in the fiscal year ending June 30, 2017.",
"title": "Relationships with humans"
},
{
"paragraph_id": 81,
"text": "Livestock guardian dogs are commonly used to aggressively repel predators and have worked well in both fenced pasture and range operations. A 1986 survey of sheep producers in the USA found that 82% reported the use of dogs represented an economic asset.",
"title": "Relationships with humans"
},
{
"paragraph_id": 82,
"text": "Re-wilding cattle, which involves increasing the natural protective tendencies of cattle, is a method for controlling coyotes discussed by Temple Grandin of Colorado State University. This method is gaining popularity among producers who allow their herds to calve on the range and whose cattle graze open pastures throughout the year.",
"title": "Relationships with humans"
},
{
"paragraph_id": 83,
"text": "Coyotes typically bite the throat just behind the jaw and below the ear when attacking adult sheep or goats, with death commonly resulting from suffocation. Blood loss is usually a secondary cause of death. Calves and heavily fleeced sheep are killed by attacking the flanks or hindquarters, causing shock and blood loss. When attacking smaller prey, such as young lambs, the kill is made by biting the skull and spinal regions, causing massive tissue and bone damage. Small or young prey may be completely carried off, leaving only blood as evidence of a kill. Coyotes usually leave the hide and most of the skeleton of larger animals relatively intact, unless food is scarce, in which case they may leave only the largest bones. Scattered bits of wool, skin, and other parts are characteristic where coyotes feed extensively on larger carcasses.",
"title": "Relationships with humans"
},
{
"paragraph_id": 84,
"text": "Tracks are an important factor in distinguishing coyote from dog predation. Coyote tracks tend to be more oval-shaped and compact than those of domestic dogs, and their claw marks are less prominent and the tracks tend to follow a straight line more closely than those of dogs. With the exception of sighthounds, most dogs of similar weight to coyotes have a slightly shorter stride. Coyote kills can be distinguished from wolf kills by less damage to the underlying tissues in the former. Also, coyote scat tends to be smaller than wolf scat.",
"title": "Relationships with humans"
},
{
"paragraph_id": 85,
"text": "Coyotes are often attracted to dog food and animals that are small enough to appear as prey. Items such as garbage, pet food, and sometimes feeding stations for birds and squirrels attract coyotes into backyards. About three to five pets attacked by coyotes are brought into the Animal Urgent Care hospital of South Orange County (California) each week, the majority of which are dogs, since cats typically do not survive the attacks. Scat analysis collected near Claremont, California, revealed that coyotes relied heavily on pets as a food source in winter and spring.",
"title": "Relationships with humans"
},
{
"paragraph_id": 86,
"text": "At one location in Southern California, coyotes began relying on a colony of feral cats as a food source. Over time, the coyotes killed most of the cats and then continued to eat the cat food placed daily at the colony site by people who were maintaining the cat colony. Coyotes usually attack smaller-sized dogs, but they have been known to attack even large, powerful breeds such as the Rottweiler in exceptional cases. Dogs larger than coyotes, such as greyhounds, are generally able to drive them off and have been known to kill coyotes. Smaller breeds are more likely to suffer injury or death.",
"title": "Relationships with humans"
},
{
"paragraph_id": 87,
"text": "Coyote hunting is one of the most common forms of predator hunting that humans partake in. There are not many regulations with regard to the taking of the coyote which means there are many different methods that can be used to hunt the animal. The most common forms are trapping, calling, and hound hunting. Since coyotes are colorblind, seeing only in shades of gray and subtle blues, open camouflages, and plain patterns can be used. As the average male coyote weighs 8 to 20 kg (18 to 44 lbs) and the average female coyote 7 to 18 kg (15 to 40 lbs), a universal projectile that can perform between those weights is the .223 Remington, so that the projectile expands in the target after entry, but before the exit, thus delivering the most energy.",
"title": "Relationships with humans"
},
{
"paragraph_id": 88,
"text": "Coyotes being the light and agile animals they are, they often leave a very light impression on terrain. The coyote's footprint is oblong, approximately 6.35 cm (2.5-inches) long and 5.08 cm (2-inches) wide. There are four claws in both their front and hind paws. The coyote's center pad is relatively shaped like that of a rounded triangle. Like the domestic dog the coyote's front paw is slightly larger than the hind paw. The coyote's paw is most similar to that of the domestic dog.",
"title": "Relationships with humans"
},
{
"paragraph_id": 89,
"text": "Prior to the mid-19th century, coyote fur was considered worthless. This changed with the diminution of beavers, and by 1860, the hunting of coyotes for their fur became a great source of income (75 cents to $1.50 per skin) for wolfers in the Great Plains. Coyote pelts were of significant economic importance during the early 1950s, ranging in price from $5 to $25 per pelt, depending on locality. The coyote's fur is not durable enough to make rugs, but can be used for coats and jackets, scarves, or muffs. The majority of pelts are used for making trimmings, such as coat collars and sleeves for women's clothing. Coyote fur is sometimes dyed black as imitation silver fox.",
"title": "Relationships with humans"
},
{
"paragraph_id": 90,
"text": "Coyotes were occasionally eaten by trappers and mountain men during the western expansion. Coyotes sometimes featured in the feasts of the Plains Indians, and coyote pups were eaten by the indigenous people of San Gabriel, California. The taste of coyote meat has been likened to that of the wolf and is more tender than pork when boiled. Coyote fat, when taken in the fall, has been used on occasion to grease leather or eaten as a spread.",
"title": "Relationships with humans"
},
{
"paragraph_id": 91,
"text": "Coyotes were likely semidomesticated by various pre-Columbian cultures. Some 19th-century writers wrote of coyotes being kept in native villages in the Great Plains. The coyote is easily tamed as a pup, but can become destructive as an adult. Both full-blooded and hybrid coyotes can be playful and confiding with their owners, but are suspicious and shy of strangers, though coyotes being tractable enough to be used for practical purposes like retrieving and pointing have been recorded. A tame coyote named \"Butch\", caught in the summer of 1945, had a short-lived career in cinema, appearing in Smoky (1946) and Ramrod (1947) before being shot while raiding a henhouse.",
"title": "Relationships with humans"
}
] | The coyote is a species of canine native to North America. It is smaller than its close relative, the wolf, and slightly smaller than the closely related eastern wolf and red wolf. It fills much of the same ecological niche as the golden jackal does in Eurasia. The coyote is larger and more predatory and was once referred to as the American jackal by a behavioral ecologist. Other historical names for the species include the prairie wolf and the brush wolf. The coyote is listed as least concern by the International Union for Conservation of Nature, due to its wide distribution and abundance throughout North America. The species is versatile, able to adapt to and expand into environments modified by humans; urban coyotes are common in many cities. The coyote was sighted in eastern Panama for the first time in 2013. The coyote has 19 recognized subspecies. The average male weighs 8 to 20 kg and the average female 7 to 18 kg. Their fur color is predominantly light gray and red or fulvous interspersed with black and white, though it varies somewhat with geography. It is highly flexible in social organization, living either in a family unit or in loosely knit packs of unrelated individuals. Primarily carnivorous, its diet consists mainly of deer, rabbits, hares, rodents, birds, reptiles, amphibians, fish, and invertebrates, though it may also eat fruits and vegetables on occasion. Its characteristic vocalization is a howl made by solitary individuals. Humans are the coyote's greatest threat, followed by cougars and gray wolves. Despite predation by gray wolves, coyotes sometimes mate with them, and with eastern, or red wolves, producing "coywolf" hybrids. In the northeastern regions of North America, the eastern coyote is the result of various historical and recent matings with various types of wolves. Genetic studies show that most North American wolves contain some level of coyote DNA. The coyote is a prominent character in Native American folklore, mainly in Aridoamerica, usually depicted as a trickster that alternately assumes the form of an actual coyote or a man. As with other trickster figures, the coyote uses deception and humor to rebel against social conventions. The animal was especially respected in Mesoamerican cosmology as a symbol of military might. After the European colonization of the Americas, it was seen in Anglo-American culture as a cowardly and untrustworthy animal. Unlike wolves, which have seen their public image improve, attitudes towards the coyote remain largely negative. | 2001-10-20T12:54:44Z | 2023-12-30T17:39:48Z | [
"Template:Cite web",
"Template:As of",
"Template:Small",
"Template:Carnivora",
"Template:See also",
"Template:Notelist",
"Template:Use mdy dates",
"Template:Nbsp",
"Template:ITIS",
"Template:Asof",
"Template:Cite AV media",
"Template:Cite journal",
"Template:Cite conference",
"Template:Authority control",
"Template:Short description",
"Template:Speciesbox",
"Template:Convert",
"Template:Cite news",
"Template:Taxonbar",
"Template:Anchor",
"Template:Harvnb",
"Template:ISBN",
"Template:ASIN",
"Template:Wiktionary",
"Template:Cite book",
"Template:Image frame",
"Template:Failed verification",
"Template:Cite dictionary",
"Template:Lang",
"Template:Cbignore",
"Template:Cite thesis",
"Template:Wikiquote",
"Template:Cite magazine",
"Template:Wikispecies",
"Template:Commons",
"Template:Other uses",
"Template:Good article",
"Template:Blockquote",
"Template:Main",
"Template:Use American English",
"Template:MSW3 Wozencraft",
"Template:Pronunciation",
"Template:Efn",
"Template:Cite EB1911",
"Template:Reflist",
"Template:Webarchive",
"Template:Sfn",
"Template:Further",
"Template:Cite report",
"Template:North American Game",
"Template:Cladogram",
"Template:Nobr",
"Template:Citation"
] | https://en.wikipedia.org/wiki/Coyote |
6,711 | Compressor (disambiguation) | A compressor is a mechanical device that increases the pressure of a gas by reducing its volume.
Compressor may also refer to: | [
{
"paragraph_id": 0,
"text": "A compressor is a mechanical device that increases the pressure of a gas by reducing its volume.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Compressor may also refer to:",
"title": ""
}
] | A compressor is a mechanical device that increases the pressure of a gas by reducing its volume. Compressor may also refer to: A device that performs Compression (disambiguation)
Compressor, for dynamic range compression
Compressor (software), a video and audio media compression and encoding application | 2019-10-01T18:42:41Z | [
"Template:Intitle",
"Template:Disambiguation",
"Template:Wiktionary",
"Template:Lookfrom"
] | https://en.wikipedia.org/wiki/Compressor_(disambiguation) |
|
6,713 | Conan the Barbarian | Conan the Barbarian (also known as Conan the Cimmerian) is a fictional sword and sorcery hero who originated in pulp magazines and has since been adapted to books, comics, films (including Conan the Barbarian and Conan the Destroyer), television programs (animated and live-action), video games, and role-playing games. Robert E. Howard created the character in 1932 for a series of fantasy stories published in Weird Tales magazine.
The earliest appearance of a Robert E. Howard character named Conan was that of a black-haired barbarian with heroic attributes in the 1931 short story "People of the Dark". By 1932, Howard had fully conceptualized Conan. Before his death, Howard had written 21 stories starring the barbarian. Over the years many other writers have written works featuring Conan.
Many Conan the Barbarian stories feature Conan embarking on heroic adventures filled with common fantasy elements such as princesses and wizards. Howard's mythopoeia has the stories set in the legendary Hyborian Age in the times after the fall of Atlantis. Conan is a Cimmerian; the Cimmerians are descendants of the Atlanteans and ancestors of the modern Gaels. Conan is himself a descendant of Kull of Atlantis (an earlier adventurer of Howard's). He was born on a battlefield and is the son of a blacksmith. Conan is characterized as chivalric due to his penchant for saving damsels in distress. He is honorable and has a sense of enduring loyalty. In contrast to his brooding ancestor, Kull, Conan has a sense of humour. He possesses great strength, combativeness, intelligence, agility, and endurance. The barbarian's appearance is iconic: square-cut black hair, blue eyes, tanned skin, giant stature, and, often, a barbarian's garb.
Licensed comics published in the 1970s by Marvel Comics drew further popularity to the character, introducing the now iconic image of Conan in his loincloth. The most popular cinematic adaptation is the 1982 film, Conan the Barbarian directed by John Milius and starring Arnold Schwarzenegger as Conan, in which the plot revolves around Conan facing the villainous Thulsa Doom.
Robert E. Howard created Conan the Barbarian in a series of fantasy stories published in Weird Tales from 1932. Howard was searching for a new character to market to the burgeoning pulp outlets of the early 1930s. In October 1931, he submitted the short story "People of the Dark" to Clayton Publications' new magazine, Strange Tales of Mystery and Terror (June 1932). "People of the Dark" is a story about the remembrance of "past lives", and in its first-person narrative, the protagonist describes one of his previous incarnations: Conan is a black-haired barbarian hero who swears by a deity called Crom. Some Howard scholars believe this Conan to be a forerunner of the more famous character.
In February 1932, Howard vacationed at a border town on the lower Rio Grande. During this trip, he further conceived the character of Conan and also wrote the poem "Cimmeria", much of which echoes specific passages in Plutarch's Lives. According to some scholars, reading Thomas Bulfinch inspired Howard to "coalesce into a coherent whole his literary aspirations and the strong physical, autobiographical elements underlying the creation of Conan".
Having digested these influences upon returning from his trip, Howard rewrote a rejected story, "By This Axe I Rule!" (May 1929), replacing his existing character Kull of Atlantis with his new hero and re-titling it "The Phoenix on the Sword". Howard also wrote "The Scarlet Citadel" and "The Frost-Giant's Daughter", inspired by the Greek myth of Daphne, and submitted both stories to Weird Tales magazine. Although "The Frost-Giant's Daughter" was rejected, the magazine accepted "The Phoenix on the Sword" after it received the requested polishing, and published it in the December 1932 issue. "The Scarlet Citadel" was published the following month.
"The Phoenix on the Sword" appeared in Weird Tales cover-dated December 1932. Editor Farnsworth Wright subsequently prompted Howard to write an 8,000-word essay for personal use detailing "the Hyborian Age", the fictional setting for Conan. Using this essay as his guideline, Howard began plotting "The Tower of the Elephant", a new Conan story that was the first to integrate his new conception of the Hyborian world.
The publication and success of "The Tower of the Elephant" spurred Howard to write more Conan stories for Weird Tales. By the time of Howard's suicide in 1936, he had written 21 complete stories, 17 of which had been published, as well as multiple unfinished fragments.
Following Howard's death, the copyright of the Conan stories passed through several hands. Eventually L. Sprague de Camp was entrusted with management of the fiction line and, beginning with 1967's Conan released by Lancer Books, oversaw a paperback series collecting all of Howard's stories (Lancer folded in 1973 and Ace Books picked up the line, reprinting the older volumes with new trade dress and continuing to release new ones). Howard's original stories received additional edits by de Camp, and de Camp also decided to create additional Conan stories to publish alongside the originals, working with Björn Nyberg and especially Lin Carter. These new stories were created from a mixture of already-complete Howard stories with different settings and characters that were altered to feature Conan and the Hyborian setting instead, incomplete fragments and outlines for Conan stories that were never completed by Howard, and all-new pastiches. Lastly, de Camp created prefaces for each story, fitting them into a timeline of Conan's life that he created.
For roughly 40 years, the original versions of Howard's Conan stories remained out of print. In 1977, the publisher Berkley Books issued three volumes using the earliest published form of the texts from Weird Tales and thus no de Camp edits, with Karl Edward Wagner as series editor, but these were halted by action from de Camp before the remaining three intended volumes could be released. In the 1980s and 1990s, the copyright holders permitted Howard's stories to go out of print entirely as the public demand for sword & sorcery dwindled, but continued to release the occasional new Conan novel by other authors such as Leonard Carpenter, Roland Green, and Harry Turtledove.
In 2000, the British publisher Gollancz Science Fiction issued a two-volume, complete edition of Howard's Conan stories as part of its Fantasy Masterworks imprint, which included several stories that had never seen print in their original form. The Gollancz edition mostly used the versions of the stories as published in Weird Tales.
The two volumes were combined and the stories restored to chronological order as The Complete Chronicles of Conan: Centenary Edition (Gollancz Science Fiction, 2006; edited and with an Afterword by Steve Jones).
In 2003, another British publisher, Wandering Star Books, made an effort both to restore Howard's original manuscripts and to provide a more scholarly and historical view of the Conan stories. It published hardcover editions in England, which were republished in the United States by the Del Rey imprint of Ballantine Books. The first book, Conan of Cimmeria: Volume One (1932–1933) (2003; published in the US as The Coming of Conan the Cimmerian) includes Howard's notes on his fictional setting as well as letters and poems concerning the genesis of his ideas. This was followed by Conan of Cimmeria: Volume Two (1934) (2004; published in the US as The Bloody Crown of Conan) and Conan of Cimmeria: Volume Three (1935–1936) (2005; published in the US as The Conquering Sword of Conan). These three volumes include all the original Conan stories.
The stories occur in the pseudo-historical "Hyborian Age", set after the destruction of Atlantis and before the rise of any known ancient civilization. This is a specific epoch in a fictional timeline created by Howard for many of the low fantasy tales of his artificial legendary.
The reasons behind the invention of the Hyborian Age were perhaps commercial. Howard had an intense love for history and historical dramas, but he also recognized the difficulties and the time-consuming research needed to maintain historical accuracy. Moreover, the poorly stocked libraries in the rural part of Texas where Howard lived did not have the material needed for such research. By conceiving "a vanished age" and by choosing names that resembled human history, Howard avoided anachronisms and the need for lengthy exposition.
According to "The Phoenix on the Sword", the adventures of Conan take place "Between the years when the oceans drank Atlantis and the gleaming cities, and the years of the rise of the Sons of Aryas."
"Hither came Conan, the Cimmerian, black-haired, sullen-eyed, sword in hand, a thief, a reaver, a slayer, with gigantic melancholies and gigantic mirth, to tread the jeweled thrones of the Earth under his sandalled feet."
Robert E. Howard, The Phoenix on the Sword, 1932.
Conan is a Cimmerian. The writings of Robert E. Howard (particularly his essay "The Hyborian Age") suggest that his Cimmerians are based on the Celts or perhaps the historic Cimmerians. Conan was born on a battlefield and is the son of a village blacksmith. Conan matured quickly as a youth and, by age fifteen, was already a respected warrior who had participated in the destruction of the Aquilonian fortress of Venarium. After its demise, he was struck by wanderlust and began the adventures chronicled by Howard, encountering skulking monsters, evil wizards, tavern wenches, and beautiful princesses. He roamed throughout the Hyborian Age nations as a thief, outlaw, mercenary, and pirate. As he grew older, he began commanding vast units of warriors and escalating his ambitions. In his forties, he seized the crown from the tyrannical king of Aquilonia, the most powerful kingdom of the Hyborian Age, having strangled the previous ruler on the steps of his own throne. Conan's adventures often result in him performing heroic feats, though his motivation for doing so is largely to protect his own survival or for personal gain.
A conspicuous element of Conan's character is his chivalry. He is extremely reluctant to fight women (even when they fight him) and has a strong tendency to save a damsel in distress. In "Jewels of Gwahlur", he has to make a split-second decision whether to save the dancing girl Muriela or the chest of priceless gems he had spent months searching for. Without hesitation, he rescues Muriela and allows the treasure to be irrevocably lost. In "The Black Stranger", Conan saves the exiled Zingaran Lady Belesa at considerable risk to himself, giving her as a parting gift his fortune in gems, large enough to ensure a comfortable and wealthy life in Zingara, while asking for no favors in return. Reviewer Jennifer Bard also noted that when Conan is in a pirate crew or a robber gang led by another male, his tendency is to subvert and undermine the leader's authority, and eventually supplant (and often kill) him (e.g., "Pool of the Black One", "A Witch Shall be Born", "Shadows in the Moonlight"). Conversely, in "Queen of the Black Coast", it is noted that Conan "generally agreed to Belit's plan. Hers was the mind that directed their raids, his the arm that carried out her ideas. It was a good life." And at the end of "Red Nails", Conan and Valeria seem to be headed towards a reasonably amicable piratical partnership.
Conan has "sullen", "smoldering", and "volcanic" blue eyes with a black "square-cut mane". Howard once describes him as having a hairy chest and, while comic book interpretations often portray Conan as wearing a loincloth or other minimalist clothing to give him a more barbaric image, Howard describes the character as wearing whatever garb is typical for the kingdom and culture in which Conan finds himself. Howard never gave a strict height or weight for Conan in a story, only describing him in loose terms like "giant" and "massive". In the tales, no human is ever described as being stronger than Conan, although few are mentioned as taller (including the strangler, Baal-Pteor) or of larger bulk. In a letter to P. Schuyler Miller and John D. Clark in 1936, only three months before Howard's death, Conan is described as standing 6 ft/183 cm and weighing 180 pounds (82 kg) when he takes part in an attack on Venarium at only 15 years old, though being far from fully grown. At one point, when he is meeting Juma in Kush, he describes Conan as tall as his friend, at nearly 7 ft. in height. Conan himself says in "Beyond the Black River" that he had "not yet seen 15 snows". at the Battle of Venarium. "At Vanarium he was already a formidable antagonist, though only fifteen, He stood six feet tall [1.83 m] and weighed 180 pounds [82 kg], though he lacked much of having his full growth." Although Conan is muscular, Howard frequently compares his agility and way of moving to that of a panther (see, for instance, "Jewels of Gwahlur", "Beyond the Black River", or "Rogues in the House"). His skin is frequently characterized as bronzed from constant exposure to the sun. In his younger years, he is often depicted wearing a light chain shirt and a horned helmet, though appearances vary with different stories.
During his reign as king of Aquilonia, Conan was
[...] a tall man, mightily shouldered and deep of chest, with a massive corded neck and heavily muscled limbs. He was clad in silk and velvet, with the royal lions of Aquilonia worked in gold upon his rich jupon, and the crown of Aquilonia shone on his square-cut black mane; but the great sword at his side seemed more natural to him than the regal accoutrements. His brow was low and broad, his eyes a volcanic blue that smoldered as if with some inner fire. His dark, scarred, almost sinister face was that of a fighting-man, and his velvet garments could not conceal the hard, dangerous lines of his limbs.
Howard imagined the Cimmerians as a pre-Celtic people with mostly black hair and blue or grey eyes. Ethnically, the Cimmerians to which Conan belongs are descendants of the Atlanteans, though they do not remember their ancestry. In his fictional historical essay "The Hyborian Age", Howard describes how the people of Atlantis—the land where his character King Kull originated—had to move east after a great cataclysm changed the face of the world and sank their island, settling where Ireland and Scotland would eventually be located. Thus they are (in Howard's work) the ancestors of the Irish and Scottish (the Celtic Gaels) and not the Picts, the other ancestor of modern Scots who also appear in Howard's work. In the same work, Howard also described how the Cimmerians eventually moved south and east after the age of Conan (presumably in the vicinity of the Black Sea, where the historical Cimmerians dwelt).
Despite his brutish appearance, Conan uses his brains as well as his brawn. The Cimmerian is a highly skilled warrior, possibly without peer with a sword, but his travels have given him vast experience in other trades, especially as a thief. He is also a talented commander, tactician, and strategist, as well as a born leader. In addition, Conan has advanced knowledge of languages and codes and is able to recognize, or even decipher, certain ancient or secret signs and writings. For example, in "Jewels of Gwahlur" Howard states: "In his roaming about the world the giant adventurer had picked up a wide smattering of knowledge, particularly including the speaking and reading of many alien tongues. Many a sheltered scholar would have been astonished at the Cimmerian's linguistic abilities." He also has incredible stamina, enabling him to go without sleep for a few days. In "A Witch Shall be Born", Conan fights armed men until he is overwhelmed, captured, and crucified, then goes an entire night and day without water. Even so, Conan still possesses the strength to pull the nails from his feet, hoist himself into a horse's saddle, and ride for ten miles.
Another noticeable trait is his sense of humor, largely absent in the comics and movies, but very much a part of Howard's original vision of the character (particularly apparent in "Xuthal of the Dusk", also known as "The Slithering Shadow".) His sense of humor can also be rather grimly ironic, as was demonstrated by how he unleashes his own version of justice on the treacherous—and ill-fated—innkeeper Aram Baksh in "Shadows in Zamboula".
He is a loyal friend to those true to him, with a barbaric code of conduct that often marks him as more honorable than the more sophisticated people he meets in his travels. Indeed, his straightforward nature and barbarism are constants in all the tales.
Conan is a formidable combatant both armed and unarmed. With his back to the wall, Conan is capable of engaging and killing opponents by the score. This is seen in several stories, such as "Queen of the Black Coast", "The Scarlet Citadel", and "A Witch Shall Be Born". Conan is not superhuman, though; he needed the providential help of Zelata's wolf to defeat four Nemedian soldiers in Howard's novel The Hour of the Dragon. Some of his hardest victories have come from fighting single opponents of inhuman strength, such as Thak, an ape-like humanoid from "Rogues in the House", or the strangler Baal-Pteor in "Shadows in Zamboula". Conan is far from untouchable and has been captured or defeated several times (on one occasion, knocking himself out after drunkenly running into a wall).
Howard frequently corresponded with H. P. Lovecraft, and the two would sometimes insert references or elements of each other's settings in their works. Later editors reworked many of the original Conan stories by Howard, thus diluting this connection. Nevertheless, many of Howard's unedited Conan stories are arguably part of the Cthulhu Mythos. Additionally, many of the Conan stories by Howard, de Camp, and Carter used geographical place names from Clark Ashton Smith's Hyperborean Cycle.
A number of untitled synopses for Conan stories also exist.
The character of Conan has proven durably popular, resulting in Conan stories by later writers such as Poul Anderson, Leonard Carpenter, Lin Carter, L. Sprague de Camp, Roland J. Green, John C. Hocking, Robert Jordan, Sean A. Moore, Björn Nyberg, Andrew J. Offutt, Steve Perry, John Maddox Roberts, Harry Turtledove, and Karl Edward Wagner. Some of these writers have finished incomplete Conan manuscripts by Howard. Others were created by rewriting Howard stories which originally featured entirely different characters from entirely different milieus. Most, however, are completely original works. In total, more than fifty novels and dozens of short stories featuring the Conan character have been written by authors other than Howard.
The Gnome Press edition (1950–1957) was the first hardcover collection of Howard's Conan stories, including all the original Howard material known to exist at the time, some left unpublished in his lifetime. The later volumes contain some stories rewritten by L. Sprague de Camp (like "The Treasure of Tranicos"), including several non-Conan Howard stories, mostly historical exotica situated in the Levant at the time of the Crusades, which he turned into Conan yarns. The Gnome edition also issued the first Conan story written by an author other than Howard—the final volume published, which is by Björn Nyberg and revised by de Camp.
The Lancer/Ace editions (1966–1977), under the direction of de Camp and Lin Carter, were the first comprehensive paperbacks, compiling the material from the Gnome Press series together in a chronological order with all the remaining original Howard material, including that left unpublished in his lifetime and fragments and outlines. These were completed by de Camp and Carter. The series also included Howard stories originally featuring other protagonists that were rewritten by de Camp as Conan stories. New Conan stories written entirely by de Camp and Carter were added as well. Lancer Books went out of business before bringing out the entire series, the publication of which was completed by Ace Books. Eight of the eventual twelve volumes published featured dynamic cover paintings by Frank Frazetta that, for many fans, presented the definitive, iconic impression of Conan and his world. For decades to come, most other portrayals of the Cimmerian and his imitators were heavily influenced by the cover paintings of this series.
Most editions after the Lancer/Ace series have been of either the original Howard stories or Conan material by others, but not both. The exceptions are the Ace Maroto editions (1978–1981), which include both new material by other authors and older material by Howard, though the latter comprises some of the non-Conan tales by Howard rewritten as Conan stories by de Camp. Notable later editions of the original Howard Conan stories include the Donald M. Grant editions (1974–1989, incomplete); Berkley editions (1977); Gollancz editions (2000–2006), and Wandering Star/Del Rey editions (2003–2005). Later series of new Conan material include the Bantam editions (1978–1982) and Tor editions (1982–2004).
In an attempt to provide a coherent timeline which fit the numerous adventures of Conan penned by Robert E. Howard and later writers, various "Conan chronologies" have been prepared by many people from the 1930s onward. Note that no consistent timeline has yet accommodated every single Conan story. The following are the principal theories that have been advanced over the years.
The very first Conan cinematic project was planned by Edward Summer. Summer envisioned a series of Conan films, much like the James Bond franchise. He outlined six stories for this film series, but none were ever made. An original screenplay by Summer and Roy Thomas was written, but their lore-authentic screen story was never filmed. However, the resulting film, Conan the Barbarian (1982), was a combination of director John Milius's ideas and plots from Conan stories (written also by Howard's successors, notably Lin Carter and L. Sprague de Camp). A Nietzschean motto and Conan's life philosophy were notably added in this adaptation.
The plot of Conan the Barbarian (1982) begins with Conan being enslaved by the Vanir raiders of Thulsa Doom, a malevolent warlord who is responsible for the slaying of Conan's parents and the genocide of his people. Later, Thulsa Doom becomes a cult leader of a religion that worships Set, a Snake God. The vengeful Conan, the archer Subotai and the thief Valeria set out on a quest to rescue a princess held captive by Thulsa Doom. The film was directed by John Milius and produced by Dino De Laurentiis. The character of Conan was played by Jorge Sanz as a child and Arnold Schwarzenegger as an adult. It was Schwarzenegger's break-through role as an actor.
This film was followed by a less popular sequel, Conan the Destroyer in 1984. This sequel was a more typical fantasy-genre film and was even less faithful to Howard's Conan stories, being just a picaresque story of an assorted bunch of adventurers.
The third film in the Conan trilogy was planned for 1987 to be titled Conan the Conqueror. The director was to be either Guy Hamilton or John Guillermin. Since Arnold Schwarzenegger was committed to the film Predator and De Laurentiis's contract with the star had expired after his obligation to Red Sonja and Raw Deal, he wasn't keen to negotiate a new one; thus the third Conan film sank into development hell. The script was eventually turned into Kull the Conqueror.
There were rumors in the late 1990s of another Conan sequel, a story about an older Conan titled King Conan: Crown of Iron, but Schwarzenegger's election in 2003 as governor of California ended this project. Warner Bros. spent seven years trying to get the project off the ground. However, in June 2007 the rights reverted to Paradox Entertainment, though all drafts made under Warner remained with them. In August 2007, it was announced that Millennium Films had acquired the rights to the project, with the intention of producing stories more faithful to the Robert E. Howard creation. In June 2009, Millennium hired Marcus Nispel to direct, with production aimed at a Spring 2010 start. In January 2010, Jason Momoa was selected for the role of Conan. The film was released in August 2011, and met poor critical reviews and box office results.
In 2012, producers Chris Morgan and Frederick Malmberg announced plans for a sequel to the 1982 Conan the Barbarian titled The Legend of Conan, with Arnold Schwarzenegger reprising his role as Conan. A year later, Deadline reported that Andrea Berloff would write the script. Years passed since the initial announcement as Schwarzenegger worked on other films, but as late as 2016, Schwarzenegger affirmed his enthusiasm for making the film, saying, "Interest is high ... but we are not rushing." The script was finished, and Schwarzenegger and Morgan were meeting with possible directors. In April 2017, producer Chris Morgan stated that Universal had dropped the project, although there was a possibility of a TV show. The story of the film was supposed to be set 30 years after the first, with some inspiration from Clint Eastwood's Unforgiven.
There have been three television series related to Conan:
Conan the Barbarian has appeared in comics nearly non-stop since 1970. The comics are arguably, apart from the books, the vehicle that had the greatest influence on the longevity and popularity of the character. The earliest comic book adaptation of Conan was written in Spanish and first published in Mexico in the fifties. This version, which was done without authorization from the estate of Robert E. Howard, is loosely based on the short story Queen of the Black Coast. The earliest licensed comic adaptations were written in English and first published by Marvel Comics in the seventies, beginning with Conan the Barbarian (1970–1993) and the classic Savage Sword of Conan (1974–1995). Dark Horse Comics launched their Conan series in 2003. Dark Horse Comics is currently publishing compilations of the 1970s Marvel Comics series in trade paperback format.
Barack Obama, former President of the United States, is a collector of Conan the Barbarian comic books and a big fan of the character; he appeared as a character in the comic book Barack the Barbarian from Devil's Due.
Marvel Comics introduced a relatively lore-faithful version of Conan the Barbarian in 1970 with Conan the Barbarian, written by Roy Thomas and illustrated by Barry Windsor-Smith. Smith was succeeded by penciller John Buscema, while Thomas continued to write for many years. Later writers included J. M. DeMatteis, Bruce Jones, Michael Fleisher, Doug Moench, Jim Owsley, Alan Zelenetz, Chuck Dixon, and Don Kraar. In 1974, the Conan the Barbarian series spawned the more adult-oriented, black-and-white comics magazine Savage Sword of Conan, written by Thomas with art mostly by Buscema or Alfredo Alcala. Marvel also published several graphic novels starring the character, and a handbook with detailed information about the Hyborian world. Conan the Barbarian is also officially considered to be part of the larger Marvel Universe and has interacted with heroes and villains alike.
The Marvel Conan stories were also adapted as a newspaper comic strip which appeared daily and Sunday from 4 September 1978 to 12 April 1981. Originally written by Roy Thomas and illustrated by John Buscema, the strip was continued by several different Marvel artists and writers.
Dark Horse Comics began their comic adaptation of the Conan saga in 2003. Entitled simply Conan, the series was first written by Kurt Busiek and pencilled by Cary Nord. Tim Truman replaced Busiek when Busiek signed an exclusive contract with DC Comics, though Busiek's issues were sometimes used as filler. This series is an interpretation of the original Conan material by Robert E. Howard, supplemented by wholly original material, with no connection whatsoever to the earlier Marvel comics or to any Conan story not written or envisioned by Howard.
A second series, Conan the Cimmerian was released in 2008 by Tim Truman (writer) and Tomás Giorello (artist). The series ran for twenty-six issues, including an introductory "zero" issue.
Dark Horse's third series, Conan: Road of Kings, began in December 2010 by Roy Thomas (writer) and Mike Hawthorne (artist) and ran for twelve issues.
A fourth series, Conan the Barbarian, began in February 2012 by Brian Wood (writer) and Becky Cloonan (artist). It ran for twenty-five issues, and expanded on Robert E. Howard's Queen of the Black Coast.
A fifth series, Conan the Avenger, began in April 2014 by Fred Van Lente (writer) and Brian Ching (artist). It ran for twenty-five issues, and expanded on Robert E. Howard's The Snout in the Dark and A Witch Shall Be Born.
Dark Horse's sixth series, Conan the Slayer, began in July 2016 by Cullen Bunn (writer) and Sergio Dávila (artist).
In 2018, Marvel reacquired the rights and started new runs of both Conan the Barbarian and Savage Sword of Conan in January/February 2019. Conan is also a lead in the Savage Avengers title, which launched in 2019 and received a second volume in 2022.
In 2022, it was revealed that Titan Publishing Group had acquired the rights from Heroic Signatures to make Conan comics, with a new ongoing series set to release in May 2023.
TSR, Inc. signed a license agreement in 1984 to publish Conan-related gaming material.
In 1988, Steve Jackson Games acquired a Conan license and began publishing Conan solo adventures for its generic GURPS rules system, followed by a GURPS Conan core rulebook in 1989.
In 2003, the British company Mongoose Publishing acquired the rights to the Conan gaming franchise and published a Conan role-playing game from 2004 until 2010. The game used the OGL System of rules that Mongoose had established for its OGL series of games.
In 2010, Mongoose Publishing dropped the Conan license. In February 2015, another British company, Modiphius Entertainment, acquired the license and announced plans to release a new Conan role-playing game in August of that year. In the event, the core rulebook was not launched on Kickstarter until a year later, in February 2016, when the campaign comfortably exceeded the funds needed for publication. Long after the Kickstarter ended, the core rulebook was released in PDF format on January 31, 2017, and the physical core rulebook finally entered distribution in June 2017.
Nine video games have been released based on the Conan mythos.
The name Conan and the names of some of Robert E. Howard's other characters are claimed as trademarks by Conan Properties International and licensed to Cabinet Entertainment, both entities controlled by CEO Fredrik Malmberg.
Howard's Conan stories were published at a time when the date of publication was the marker for U.S. copyright terms (1932–1963), and the later owners failed to renew the copyrights, so the exact copyright status of all of Howard's Conan works is in question. The majority of Howard's Conan fiction exists in at least two versions, subject to different copyright standards: 1) the original Weird Tales publications from before or shortly after Howard's death, which are generally understood to be in the public domain, and 2) restored versions based upon manuscripts which were unpublished during Howard's lifetime.
The Australian site of Project Gutenberg hosts digital copies of many of Howard's stories, including several works about Conan.
In the United Kingdom, works enter the public domain 70 years after the death of the author. As Howard died in 1936, his works have been in the public domain there since 2006. The same rule applies in Malmberg's home country, Sweden.
In August 2018, Conan Properties International LLC won a default judgment in a suit against Spanish sculptor Ricardo Jove Sanchez after he failed to appear in court in the United States. Jove had started a crowdfunding campaign on Kickstarter that raised around €3,000, with the intent of selling barbarian figurines to online customers, including those in the United States. The magistrate judge originally recommended statutory damages for infringement of three Robert E. Howard characters, not including Conan, but Jove was eventually fined $3,000 per character used in the campaign, Conan included, for a total of $21,000.
In September 2020, it was announced that Netflix, in a deal involving Malmberg and Mark Wheeler of Pathfinder Media, had acquired from Conan Properties International the exclusive rights to the Conan library for live-action and animated films and TV shows.
Chris Marker

Chris Marker (French: [maʁkɛʁ]; 29 July 1921 – 29 July 2012) was a French writer, photographer, documentary film director, multimedia artist and film essayist. His best known films are La Jetée (1962), A Grin Without a Cat (1977) and Sans Soleil (1983). Marker is usually associated with the Left Bank subset of the French New Wave that occurred in the late 1950s and 1960s, and included such other filmmakers as Alain Resnais, Agnès Varda and Jacques Demy.
His friend and sometime collaborator Alain Resnais called him "the prototype of the twenty-first-century man." Film theorist Roy Armes has said of him: "Marker is unclassifiable because he is unique... The French Cinema has its dramatists and its poets, its technicians, and its autobiographers, but only has one true essayist: Chris Marker."
Marker was born Christian François Bouche-Villeneuve. He was always elusive about his past and known to refuse interviews and not allow photographs to be taken of him; his place of birth is highly disputed. Some sources and Marker himself claim that he was born in Ulaanbaatar, Mongolia. Other sources say he was born in Belleville, Paris, and others, in Neuilly-sur-Seine. The 1949 edition of Le Cœur Net gives his birthday as 22 July. Film critic David Thomson has said, "Marker told me himself that Mongolia is correct. I have since concluded that Belleville is correct—but that does not spoil the spiritual truth of Ulan Bator." When asked about his secretive nature, Marker said, "My films are enough for them [the audience]."
Marker was a philosophy student in France before World War II. During the German occupation of France, he joined the Maquis (FTP), a part of the French Resistance. At some point during the war he left France and joined the United States Air Force as a paratrooper, although some sources claim that this is not true. After the war, he began a career as a journalist, first writing for the journal Esprit, a neo-Catholic, Marxist magazine where he met fellow journalist André Bazin. For Esprit, Marker wrote political commentaries, poems, short stories, and film reviews.
During this period, Marker began to travel around the world as a journalist and photographer, a vocation he pursued for the rest of his life. The French publishing company Éditions du Seuil hired him as editor of the series Petite Planète ("Small World"). That collection devoted one edition to each country and included information and photographs, and would later be published in English translation by Studio Vista and The Viking Press. In 1949 Marker published his first novel, Le Coeur net (The Forthright Spirit), which was about aviation. In 1952 Marker published an illustrated essay on French writer Jean Giraudoux, Giraudoux Par Lui-Même.
During his early journalism career, Marker became increasingly interested in filmmaking and in the early 1950s experimented with photography. Around this time Marker met and befriended many members of the Left Bank Film Movement, including Alain Resnais, Agnès Varda, Henri Colpi, Armand Gatti, and the novelists Marguerite Duras and Jean Cayrol. This group is often associated with the French New Wave directors who came to prominence during the same time period, and the groups were often friends and journalistic co-workers. The term Left Bank was coined by film critic Richard Roud, who described them as having "fondness for a kind of Bohemian life and an impatience with the conformity of the Right Bank, a high degree of involvement in literature and the plastic arts, and a consequent interest in experimental filmmaking", as well as an identification with the political left. Anatole Dauman produced many of Marker's earliest films.
In 1952 Marker made his first film, Olympia 52, a 16mm feature documentary about the 1952 Helsinki Olympic Games. In 1953 he collaborated with Resnais on the documentary Statues Also Die. The film examines traditional African art such as sculptures and masks, and its decline with coming of Western colonialism. It won the 1954 Prix Jean Vigo, but was banned by French censors for its criticism of French colonialism.
After working as assistant director on Resnais's Night and Fog in 1955, Marker made Sunday in Peking, a short documentary "film essay" in the style that characterized Marker's output for most of his career. Marker shot the film in two weeks while traveling through China with Armand Gatti in September 1955. In the film, Marker's commentary overlaps scenes from China, such as tombs that, contrary to Westernized understandings of Chinese legends, do not contain the remains of Ming Dynasty emperors.
After working on the commentary for Resnais's film Le mystère de l'atelier quinze in 1957, Marker continued to refine his style with the feature documentary Letter from Siberia. An essay film on the narrativization of Siberia, it contains Marker's signature commentary, which takes the form of a letter from the director, in the long tradition of epistolary treatments by French explorers of the "undeveloped" world. Letter looks at Siberia's movement into the 20th century and at some of the tribal cultural practices receding into the past. It combines footage Marker shot in Siberia with old newsreel footage, cartoon sequences, stills, and even an illustration of Alfred E. Neuman from Mad Magazine as well as a fake TV commercial as part of a humorous attack on Western mass culture. In producing a meta-commentary on narrativity and film, Marker uses the same brief filmic sequence three times but with different commentary—the first praising the Soviet Union, the second denouncing it, and the third taking an apparently neutral or "objective" stance.
In 1959 Marker made the animated film Les Astronautes with Walerian Borowczyk. The film was a combination of traditional drawings with still photography. In 1960 he made Description d'un combat, a documentary on the State of Israel that reflects on its past and future. The film won the Golden Bear for Best Documentary at the 1961 Berlin Film Festival.
In January 1961, Marker traveled to Cuba and shot the film ¡Cuba Sí! The film promotes and defends Fidel Castro and includes two interviews with him. It ends with an anti-American epilogue in which the United States is embarrassed by the Bay of Pigs Invasion fiasco; the film was subsequently banned. The banned essay was included in Marker's first volume of collected film commentaries, Commentaires I, published in 1961. The following year Marker published Coréennes, a collection of photographs and essays on conditions in Korea.
Marker became known internationally for the short film La Jetée (The Pier) in 1962. It tells of a post-nuclear war experiment in time travel through a series of filmed photographs developed as a photomontage of varying pace, with limited narration and sound effects. In the film, a survivor of a futuristic third World War is obsessed with distant and disconnected memories of a pier at the Orly Airport, the image of a mysterious woman, and a man's death. Scientists experimenting in time travel choose him for their studies, and he travels back in time to contact the mysterious woman, discovering that the death he remembers from the Orly Airport was his own. Except for one shot of the woman sleeping and suddenly waking up, the film is composed entirely of photographs by Jean Chiabaud and stars Davos Hanich as the man, Hélène Châtelain as the woman and filmmaker William Klein as a man from the future.
While making La Jetée, Marker was simultaneously making the 150-minute documentary essay-film Le joli mai, released in 1963. Beginning in the spring of 1962, Marker and his camera operator Pierre Lhomme shot 55 hours of footage interviewing random people on the streets of Paris. The questions, asked by the unseen Marker, range over their personal lives as well as social and political issues of relevance at that time. As he had with montages of landscapes and indigenous art, Marker created a film essay that contrasted and juxtaposed a variety of lives with his signature commentary (spoken by Marker's friends, singer-actor Yves Montand in the French version and Simone Signoret in the English version). The film has been compared to the Cinéma vérité films of Jean Rouch, and was criticized by that movement's practitioners at the time. The term "Cinéma vérité" was itself anathema to Marker, who never used it; he preferred his own term "ciné, ma vérité", meaning "cinema, my truth". It was shown in competition at the 1963 Venice Film Festival, where it won the award for Best First Work. It also won the Golden Dove Award at the Leipzig DOK Festival.
After the documentary Le Mystère Koumiko in 1965, Marker made Si j'avais quatre dromadaires, an essay-film that, like La Jetée, is a photomontage of over 800 photographs Marker had taken over the previous 10 years in 26 countries. The commentary involves a conversation between a fictitious photographer and two friends, who discuss the photos. The film's title is an allusion to a poem by Guillaume Apollinaire. It was the last film in which Marker included "travel footage" for many years.
In 1967 Marker published his second volume of collected film essays, Commentaires II. That same year, Marker organized the omnibus film Loin du Vietnam, a protest against the Vietnam War with segments contributed by Marker, Jean-Luc Godard, Alain Resnais, Agnès Varda, Claude Lelouch, William Klein, Michele Ray and Joris Ivens. The film includes footage of the war, from both sides, as well as anti-war protests in New York and Paris and other anti-war activities.
From this initial collection of filmmakers with left-wing political agendas, Marker created the group S.L.O.N. (Société pour le lancement des oeuvres nouvelles, "Society for launching new works", but also the Russian word for "elephant"). SLON was a film collective whose objectives were to make films and to encourage industrial workers to create film collectives of their own. Its members included Valerie Mayoux, Jean-Claude Lerner, Alain Adair and John Tooker. Marker is usually credited as director or co-director of all of the films made by SLON.
After the events of May 1968, Marker felt a moral obligation to abandon his own personal film career and devote himself to SLON and its activities. SLON's first film, about a strike at a Rhodiacéta factory in France, was À bientôt, j'espère (Rhodiacéta) in 1968. Later that year SLON made La Sixième face du pentagone, about an anti-war protest in Washington, D.C., as a reaction to what SLON considered to be the unfair and censored reportage of such events on mainstream television. The film was shot by François Reichenbach, who received co-director credit. La Bataille des dix millions, made in 1970 with Mayoux as co-director and Santiago Álvarez as cameraman, is about the 1970 sugar crop in Cuba and its disastrous effects on the country. In 1971, SLON made Le Train en marche, a new prologue to Soviet filmmaker Aleksandr Medvedkin's 1935 film Schastye, which had recently been re-released in France.
In 1974, SLON became I.S.K.R.A. (Images, Sons, Kinescope, Réalisations, Audiovisuelles, but also the name of Vladimir Lenin's political newspaper Iskra, which also is a Russian word for "spark").
In 1974 Marker returned to his personal work and made a film outside of ISKRA. La Solitude du chanteur de fond is a one-hour documentary about Marker's friend Yves Montand's benefit concert for Chilean refugees. The concert was Montand's first public performance in four years, and the documentary includes film clips from his long career as a singer and actor.
Marker had been working on a film about Chile with ISKRA since 1973. He collaborated with Belgian sociologist Armand Mattelart and ISKRA members Valérie Mayoux and Jacqueline Meppiel to shoot and collect the visual materials, which Marker then edited together and provided with commentary. The resulting film was the two-and-a-half-hour documentary La Spirale, released in 1975. The film chronicles events in Chile from the election of socialist President Salvador Allende in 1970 to his death during the coup of 1973.
Marker then began work on one of his most ambitious films, A Grin Without a Cat, released in 1977. The film's title refers to the Cheshire Cat from Alice in Wonderland. The metaphor compares the promise of the global socialist movement before May 1968 (the grin) with its actual presence in the world after May 1968 (the cat). The film's original French title is Le fond de l'air est rouge, which means "the air is essentially red", or "revolution is in the air", implying that the socialist movement was everywhere around the world.
The film was intended to be an all-encompassing portrait of political movements since May 1968, a summation of the work in which he had taken part for ten years. The film is divided into two parts: the first half focuses on the hopes and idealism before May 1968, and the second half on the disillusion and disappointments since those events. Marker begins the film with the Odessa Steps sequence from Sergei Eisenstein's film The Battleship Potemkin, which Marker points out is a fictitious creation of Eisenstein's that has nonetheless shaped the image of the historical event. Marker used very little commentary in this film, but its montage structure and preoccupation with memory make it a Marker film. Upon release, the film was criticized for not addressing many current issues of the New Left such as the women's movement, sexual liberation and worker self-management. The film was re-released in the US in 2002.
In the late 1970s, Marker traveled extensively throughout the world, including an extended period in Japan. From this inspiration, he first published the photo-essay Le Dépays in 1982, and then drew on the experience for his next film, Sans Soleil, released in 1983.
Sans Soleil stretches the limits of what could be called a documentary. It is an essay, a montage, mixing pieces of documentary with fiction and philosophical comment, creating an atmosphere of dream and science fiction. The main themes are Japan, Africa, memory, and travel. A sequence in the middle of the film takes place in San Francisco and heavily references Alfred Hitchcock's Vertigo, which Marker said was the only film "capable of portraying impossible memory, insane memory." The film's commentary is credited to the fictitious cameraman Sandor Krasna and read in the form of letters by an unnamed woman. Though centered on Japan, the film was also shot in other countries such as Guinea-Bissau, Ireland, and Iceland. Sans Soleil was shown at the 1983 Berlin Film Festival, where it won the OCIC Award. It was also awarded the Sutherland Trophy at the 1983 British Film Institute Awards.
In 1984, Marker was invited by producer Serge Silberman to document the making of Akira Kurosawa's film Ran. From this Marker made A.K., released in 1985. The film focuses more on Kurosawa's remote but polite personality than on the making of the film. The film was screened in the Un Certain Regard section at the 1985 Cannes Film Festival, before Ran itself had been released.
In 1985, Marker's long-time friend and neighbor Simone Signoret died of cancer. Marker then made the one-hour TV documentary Mémoires pour Simone as a tribute to her in 1986.
Beginning with Sans Soleil, Marker developed a deep interest in digital technology. From 1985 to 1988, he worked on a conversational program (a prototypical chatbot) called "Dialector," which he wrote in Applesoft BASIC on an Apple II. He incorporated audiovisual elements in addition to the snippets of dialogue and poetry that "Computer" exchanged with the user. Version 6 of this program was revived from a floppy disk (with Marker's help and permission) and emulated online in 2015.
His interest in digital technology also led to his film Level Five (1996) and Immemory (1998, 2008), an interactive multimedia CD-ROM produced for the Centre Pompidou (French-language version) and published by Exact Change (English version). Marker created a 19-minute multimedia piece in 2005 for the Museum of Modern Art in New York City titled Owls at Noon Prelude: The Hollow Men, which was influenced by T. S. Eliot's poem of the same name.
Marker lived in Paris and very rarely granted interviews. One exception was a lengthy interview with Libération in 2003 in which he explained his approach to filmmaking. When asked for a picture of himself, he usually offered a photograph of a cat instead. (Marker was represented in Agnès Varda's 2008 documentary The Beaches of Agnès by a cartoon drawing of a cat, speaking in a technologically altered voice.) Marker's own cat was named Guillaume-en-égypte. In 2009, Marker commissioned an avatar of Guillaume-en-égypte to represent him in machinima works. The avatar was created by Exosius Woolley and first appeared in the short film/machinima Ouvroir the Movie by Chris Marker.
In the 2007 Criterion Collection release of La Jetée and Sans Soleil, Marker included a short essay, "Working on a Shoestring Budget". He confessed to shooting all of Sans Soleil with a silent film camera and recording all the audio on a primitive audio cassette recorder. Marker also reminds the reader that only one short scene in La Jetée involves a moving image, as he could only borrow a movie camera for one afternoon while working on the film.
From 2007 through 2011 Marker collaborated with the art dealer and publisher Peter Blum on a variety of projects that were exhibited at the Peter Blum galleries in New York City's Soho and Chelsea neighborhoods. Marker's works were also exhibited at the Peter Blum Gallery on 57th Street in 2014. These projects include several series of printed photographs titled PASSENGERS, Koreans, Crush Art, Quelle heure est-elle?, and Staring Back; a set of photogravures titled After Dürer; a book, PASSENGERS; and digital prints of movie posters, whose titles were often appropriated, including Breathless, Hiroshima Mon Amour, Owl People, and Rin Tin Tin. The video installations Silent Movie and Owls at Noon Prelude: The Hollow Men were exhibited at Peter Blum in 2009. These works were also shown at the 2014 and 2015 Venice Biennales, the Whitechapel Gallery in London, the MIT List Visual Arts Center in Cambridge, Massachusetts, the Carpenter Center for the Visual Arts at Harvard University, the Moscow Photobiennale, Les Rencontres d'Arles de la Photographie in Arles, France, the Centre de la Photographie in Geneva, Switzerland, the Walker Art Center in Minneapolis, Minnesota, the Wexner Center for the Arts in Columbus, Ohio, The Museum of Modern Art in New York, and the Pacific Film Archive in Berkeley, California. Since 2014, the Estate of Chris Marker has been represented by Peter Blum Gallery, New York.
Marker died on 29 July 2012, his 91st birthday.
La Jetée was the inspiration for Mamoru Oshii's 1987 debut live action feature The Red Spectacles (and later for parts of Oshii's 2001 film Avalon) and also inspired Terry Gilliam's 12 Monkeys (1995) and Jonás Cuarón's Year of the Nail (2007), as well as many of Mira Nair's shots in her 2006 film The Namesake.
Cardinal vowels

Cardinal vowels are a set of reference vowels used by phoneticians in describing the sounds of languages. They are classified depending on the position of the tongue relative to the roof of the mouth, how far forward or back the highest point of the tongue is, and the position of the lips (rounded or unrounded).
A cardinal vowel is a vowel sound produced when the tongue is in an extreme position, either front or back, high or low. The current system was systematised by Daniel Jones in the early 20th century, though the idea goes back to earlier phoneticians, notably Ellis and Bell.
Three of the cardinal vowels—[i], [ɑ] and [u]—have articulatory definitions. The vowel [i] is produced with the tongue as far forward and as high in the mouth as is possible (without producing friction), with spread lips. The vowel [u] is produced with the tongue as far back and as high in the mouth as is possible, with protruded lips. This sound can be approximated by adopting the posture to whistle a very low note, or to blow out a candle. And [ɑ] is produced with the tongue as low and as far back in the mouth as possible.
The other vowels are 'auditorily equidistant' between these three 'corner vowels', at four degrees of aperture or 'height': close (high tongue position), close-mid, open-mid, and open (low tongue position).
These degrees of aperture plus the front-back distinction define eight reference points on a mixture of articulatory and auditory criteria. These eight vowels are known as the eight 'primary cardinal vowels', and vowels like these are common in the world's languages.
The lip position of each of these vowels can be reversed to match that of the corresponding vowel on the opposite side of the front-back dimension, so that e.g. Cardinal 1 can be produced with rounding somewhat similar to that of Cardinal 8; these are known as 'secondary cardinal vowels'. Sounds such as these are claimed to be less common in the world's languages. Other vowel sounds are also recognised on the vowel chart of the International Phonetic Alphabet.
Jones argued that to be able to use the cardinal vowel system effectively one must undergo training with an expert phonetician, working both on the recognition and the production of the vowels.
Cardinal vowels are not vowels of any particular language, but a measuring system. However, some languages contain vowels that are close to the cardinal vowels. An example of such a language is Ngwe, spoken in Cameroon, which has been cited as having a vowel system of eight vowels rather similar to the eight primary cardinal vowels (Ladefoged 1971:67).
Cardinal vowels 19–22 were added by David Abercrombie. In IPA Numbers, cardinal vowels 1–18 have the same numbers with 300 added, so that Cardinal 1 corresponds to IPA number 301.
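To make the numbering concrete, here is a minimal Python sketch (an illustration, not part of the source): the rule that an IPA Number is the cardinal vowel number plus 300 comes from the text above, while the symbol assignments for cardinal vowels 1–18 are the standard IPA values.

    # Cardinal vowels and their usual IPA symbols: primary (1-8),
    # secondary (9-16), and the two close central vowels (17-18).
    CARDINAL_SYMBOLS = {
        1: "i", 2: "e", 3: "ɛ", 4: "a",
        5: "ɑ", 6: "ɔ", 7: "o", 8: "u",
        9: "y", 10: "ø", 11: "œ", 12: "ɶ",
        13: "ɒ", 14: "ʌ", 15: "ɤ", 16: "ɯ",
        17: "ɨ", 18: "ʉ",
    }

    def ipa_number(cardinal: int) -> int:
        """IPA Number for cardinal vowels 1-18: the cardinal number plus 300."""
        if cardinal not in CARDINAL_SYMBOLS:
            raise ValueError("only cardinal vowels 1-18 map to IPA numbers this way")
        return 300 + cardinal

    for n, symbol in sorted(CARDINAL_SYMBOLS.items()):
        print(f"Cardinal {n:2d}  [{symbol}]  ->  IPA number {ipa_number(n)}")

For example, Cardinal 1 [i] is IPA number 301 and Cardinal 8 [u] is IPA number 308.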
The usual explanation of the cardinal vowel system implies that the competent user can reliably distinguish between sixteen Primary and Secondary vowels plus a small number of central vowels. The provision of diacritics by the International Phonetic Association further implies that intermediate values may also be reliably recognized, so that a phonetician might be able to produce and recognize not only a close-mid front unrounded vowel [e] and an open-mid front unrounded vowel [ɛ] but also a mid front unrounded vowel [e̞], a centralized mid front unrounded vowel [ë], and so on. This suggests a range of vowels nearer to forty or fifty than to twenty in number. Empirical evidence for this ability in trained phoneticians is hard to come by.
Ladefoged, in a series of pioneering experiments published in the 1950s and 60s, studied how trained phoneticians coped with the vowels of a dialect of Scottish Gaelic. He asked eighteen phoneticians to listen to a recording of ten words spoken by a native speaker of Gaelic and to place the vowels on a cardinal vowel quadrilateral. He then studied the degree of agreement or disagreement among the phoneticians. Ladefoged himself drew attention to the fact that the phoneticians who were trained in the British tradition established by Daniel Jones were closer to each other in their judgments than those who had not had this training. However, the most striking result is the great divergence of judgments among all the listeners regarding vowels that were distant from Cardinal values.
Columbia, Missouri

Columbia /kəˈlʌmbiə/ is a city in the U.S. state of Missouri. It is the county seat of Boone County and home to the University of Missouri. Founded in 1821, it is the principal city of the five-county Columbia metropolitan area and Missouri's fourth-most populous city, with an estimated 128,555 residents in 2022.
As a Midwestern college town, Columbia offers high-quality health care facilities, cultural opportunities, and a low cost of living. The tripartite establishment of Stephens College (1833), the University of Missouri (1839), and Columbia College (1851), which surround the city's downtown to the east, south, and north, has made Columbia a center of learning. At its center is 8th Street (also known as the Avenue of the Columns), which connects Francis Quadrangle and Jesse Hall to the Boone County Courthouse and City Hall. Originally an agricultural town, Columbia now relies primarily on education, with secondary interests in the healthcare, insurance, and technology sectors; it has never been a manufacturing center. Companies such as Shelter Insurance, Carfax, Veterans United Home Loans, and Slackers CDs and Games were founded in the city. Cultural institutions include the State Historical Society of Missouri, the Museum of Art and Archaeology, and the annual True/False Film Festival and Roots N Blues Festival. The Missouri Tigers, the state's only major college athletic program, play football at Faurot Field and basketball at Mizzou Arena as members of the Southeastern Conference.
The city rests upon the forested hills and rolling prairies of Mid-Missouri, near the Missouri River valley, where the Ozark Mountains begin to transform into plains and savanna. Limestone forms bluffs and glades while rain dissolves the bedrock, creating caves and springs which water the Hinkson, Roche Perche, and Bonne Femme creeks. Surrounding the city, Rock Bridge Memorial State Park, Mark Twain National Forest, and Big Muddy National Fish and Wildlife Refuge form a greenbelt preserving sensitive and rare environments. The Columbia Agriculture Park is home to the Columbia Farmers Market.
The first humans to enter the area, at least 12,000 years ago, were nomadic hunters. Later, woodland tribes lived in villages along waterways and built mounds in high places. The Osage and Missouria nations were displaced by the arrival of French traders and the rapid settlement of American pioneers. The latter arrived by the Boone's Lick Road and hailed from the culture of the Upland South, especially Virginia, Kentucky, and Tennessee. From 1812, the Boonslick area played a pivotal role in Missouri's early history and the nation's westward expansion. German, Irish, and other European immigrants soon joined them. The modern populace is unusually diverse, with over 8% of residents foreign-born. White and Black residents are the largest groups, and people of Asian descent are the third-largest. Columbia has been known as the "Athens of Missouri" for its classic beauty and educational emphasis, but is more commonly called "CoMo".
Columbia's origins lie in the settlement of American pioneers from Kentucky and Virginia in the early 1800s in a region then known as the Boonslick. Before 1815, settlement in the region was confined to small log forts because of the threat of Native American attack during the War of 1812. When the war ended, settlers came on foot, horseback, and wagon, often moving entire households along the Boone's Lick Road and sometimes bringing enslaved African Americans. By 1818 it was clear that the growing population would necessitate that a new county be created from territorial Howard County. The Moniteau Creek on the west and Cedar Creek on the east were obvious natural boundaries.
Believing it was only a matter of time before a county seat was chosen, the Smithton Land Company was formed to purchase over 2,000 acres (8.1 km²) and establish the village of Smithton (near the present-day intersection of Walnut and Garth). In 1819 Smithton was a small cluster of log cabins in an ancient forest of oak and hickory; chief among them was the cabin of Richard Gentry, a trustee of the Smithton Company who would become the first mayor of Columbia. In 1820, Boone County was formed and named after the recently deceased explorer Daniel Boone. The Missouri Legislature appointed John Gray, Jefferson Fulcher, Absalom Hicks, Lawrence Bass, and David Jackson as commissioners to select and establish a permanent county seat. Smithton never had more than twenty people, and it was quickly realized that digging wells there was difficult because of the bedrock.
Springs were discovered across the Flat Branch Creek, so in the spring of 1821 Columbia was laid out, and the inhabitants of Smithton moved their cabins to the new town. The first house in Columbia was built by Thomas Duly in 1820 at what became Fifth and Broadway. Columbia's permanence was ensured when it was chosen as county seat in 1821 and the Boone's Lick Road was rerouted down Broadway.
The roots of Columbia's three economic foundations (education, medicine, and insurance) can be traced to the city's incorporation in 1821. Original plans for the town set aside land for a state university. In 1833, Columbia Baptist Female College opened, which later became Stephens College. Columbia College, distinct from today's institution of the same name and later to become the University of Missouri, was founded in 1839. When the state legislature decided to establish a state university, Columbia raised three times as much money as any competing city, and James S. Rollins donated the land that is today the Francis Quadrangle. Soon other educational institutions were founded in Columbia, such as Christian Female College, the first college for women west of the Mississippi, which later became Columbia College.
The city benefited from being a stagecoach stop on the Santa Fe and Oregon trails, and later from the Missouri–Kansas–Texas Railroad. In 1822, William Jewell set up the first hospital. In 1830, the first newspaper began; in 1832, the first theater in the state opened; and in 1835, the state's first agricultural fair was held. By 1839, Boone County's population of 13,000 and its wealth were exceeded in Missouri only by St. Louis County, which at that time included the City of St. Louis.
Columbia's infrastructure was relatively untouched by the Civil War. As a slave state, Missouri had many residents with Southern sympathies, but it stayed in the Union. The majority of the city was pro-Union; however, the surrounding agricultural areas of Boone County and the rest of central Missouri were decidedly pro-Confederate. Because of this, the University of Missouri became a base from which Union troops operated. No battles were fought within the city because the presence of Union troops dissuaded Confederate guerrillas from attacking, though several major battles occurred at nearby Boonville and Centralia.
After Reconstruction, race relations in Columbia followed the Southern pattern of increasing violence by whites against blacks in efforts to suppress voting and free movement: George Burke, a black man who worked at the university, was lynched in 1889. In the spring of 1923, James T. Scott, an African-American janitor at the University of Missouri, was arrested on allegations of raping a university professor's daughter. He was taken from the county jail and lynched on April 29 before a white mob of roughly two thousand people, hanged from the Old Stewart Road Bridge.
In 1901, Rufus Logan established The Columbia Professional newspaper to serve Columbia's large African American population. In the 21st century, a number of efforts have been undertaken to recognize Scott's death. In 2010 his death certificate was changed to reflect that he was never tried or convicted of the charges and that he had been lynched. In 2011 a headstone was placed at his grave at Columbia Cemetery; it includes his wife's and parents' names and dates, to provide a fuller account of his life. In 2016, a marker was erected at the lynching site to memorialize Scott.
In 1963, the University of Missouri System and the Columbia College system established their headquarters in Columbia. The insurance industry also became important to the local economy as several companies established headquarters in Columbia, including Shelter Insurance, Missouri Employers Mutual, and Columbia Insurance Group. State Farm Insurance has a regional office in Columbia. In addition, the now-defunct Silvey Insurance was a large local employer.
Columbia became a transportation crossroads when U.S. Route 63 and U.S. Route 40 (which was improved as present-day Interstate 70) were routed through the city. Soon after, the city opened the Columbia Regional Airport. By 2000, the city's population was nearly 85,000.
In 2017, Columbia was in the path of totality for the solar eclipse of August 21, and the city expected upwards of 400,000 visitors for the event.
Columbia, in northern mid-Missouri, is 120 miles (190 km) from both St. Louis and Kansas City, and 29 miles (47 km) north of the state capital, Jefferson City. The city is near the Missouri River, between the Ozark Plateau and the Northern Plains.
According to the United States Census Bureau, the city has a total area of 67.45 square miles (174.69 km²), of which 67.17 square miles (173.97 km²) is land and 0.28 square miles (0.73 km²) is water.
The city generally slopes from its highest point in the northeast to its lowest point in the southwest, toward the Missouri River. Prominent tributaries of the river are Perche Creek, Hinkson Creek, and Flat Branch Creek. Along these and other creeks in the area can be found large valleys, cliffs, and cave systems such as that in Rock Bridge State Park just south of the city. These creeks have carved numerous stream valleys, giving Columbia hilly terrain similar to the Ozarks while also retaining prairie flatland typical of northern Missouri. Columbia also operates several greenbelts with trails and parks throughout town.
Large mammals found in the city include urbanized coyotes, red foxes, and numerous whitetail deer. Eastern gray squirrels and other rodents are abundant, as are cottontail rabbits and the nocturnal opossum and raccoon. Large bird species are abundant in parks and include the Canada goose and mallard duck, as well as shorebirds such as the great egret and great blue heron. Turkeys are also common in wooded areas and can occasionally be seen on the MKT recreation trail. Populations of bald eagles are found by the Missouri River. The city is on the Mississippi Flyway, used by migrating birds, and has a large variety of small bird species common to the eastern U.S. The Eurasian tree sparrow, an introduced species, is limited in North America to the counties surrounding St. Louis. Columbia has large areas of forested and open land, and many of these areas are home to wildlife.
Columbia has a humid continental climate (Köppen Dfa) marked by sharp seasonal contrasts in temperature, and is in USDA Plant Hardiness Zone 6a. The monthly daily average temperature ranges from 31.0 °F (−0.6 °C) in January to 78.5 °F (25.8 °C) in July. The high reaches or exceeds 90 °F (32 °C) on an average of 35 days per year and 100 °F (38 °C) on two, while two nights of sub-0 °F (−18 °C) lows can be expected. Precipitation tends to be greatest and most frequent in the latter half of spring, when severe weather is also most common. Snow averages 16.5 inches (42 cm) per season, mostly from December to March, with occasional accumulation in November and, more rarely, April; historically, seasonal snowfall has ranged from 3.4 in (8.6 cm) in 2005–06 to 54.9 in (139 cm) in 1977–78. Extreme temperatures have ranged from −26 °F (−32 °C) on February 12, 1899 to 113 °F (45 °C) on July 12 and 14, 1954. Readings of −10 °F (−23 °C) or 105 °F (41 °C) are uncommon, the last occurrences being January 7, 2014 and July 31, 2012.
Columbia's most significant and well-known architecture is found in its downtown area and on the university campuses. The University of Missouri's Jesse Hall and the neo-gothic Memorial Union have become icons of the city. The David R. Francis Quadrangle is an example of Thomas Jefferson's academic village concept.
Historic districts listed on the National Register of Historic Places include Downtown Columbia, the East Campus neighborhood, the West Broadway neighborhood, the Francis Quadrangle, the south campus of Stephens College, the Pierce Pennant Motor Hotel, Maplewood, and the David Guitar House. The downtown skyline is relatively low and is dominated by the 10-story Tiger Hotel and the 15-story Paquin Tower.
Downtown Columbia is an area of approximately one square mile bordered by the University of Missouri to the south, Stephens College to the east, and Columbia College to the north. The area serves as Columbia's financial and business district.
Since the early 21st century, a large number of high-rise apartment complexes have been built in downtown Columbia. Many of these buildings also offer mixed-use business and retail space on the lower levels. These developments have not been without criticism, with some residents expressing concern that the buildings hurt the historic feel of the area, or that the city does not yet have the infrastructure to support them.
The city's historic residential core lies in a ring around downtown, extending especially west along Broadway and south into the East Campus neighborhood. The city government recognizes 63 neighborhood associations. The city's densest commercial areas are primarily along Interstate 70, U.S. Route 63, Stadium Boulevard, Grindstone Parkway, and downtown.
The 2020 United States census counted 126,254 people, 49,371 households, and 25,144 families in Columbia. The population density was 1,879.6 inhabitants per square mile (725.7/km²). There were 53,746 housing units at an average density of 800.1 per square mile (308.9/km²). The racial makeup was 72.49% (91,516) white, 11.91% (15,038) black or African-American, 0.32% (398) Native American, 5.61% (7,084) Asian, 0.07% (89) Pacific Islander, 2.17% (2,734) from other races, and 7.44% (9,395) from two or more races. Hispanic or Latino people of any race were 3.4% (4,173) of the population.
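The quoted densities follow directly from the population and land-area figures; a few lines of Python (a sanity check added for illustration, not from the source) reproduce them, assuming 1 square mile = 2.58999 km²:

    # Sanity-check the 2020 census densities quoted above.
    SQ_KM_PER_SQ_MI = 2.58999

    population = 126_254     # 2020 census count
    housing_units = 53_746   # 2020 housing units
    land_sq_mi = 67.17       # land area in square miles (from the geography section)

    pop_per_sq_mi = population / land_sq_mi
    pop_per_sq_km = pop_per_sq_mi / SQ_KM_PER_SQ_MI
    units_per_sq_mi = housing_units / land_sq_mi

    print(f"population density: {pop_per_sq_mi:,.1f}/sq mi ({pop_per_sq_km:,.1f}/km²)")
    print(f"housing density:    {units_per_sq_mi:,.1f}/sq mi")

This prints roughly 1,879.6/sq mi (725.7/km²) and 800.1/sq mi, matching the census figures above.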
Of the 49,371 households, 24.0% had children under the age of 18; 38.7% were married couples living together; 31.4% had a female householder with no husband present. Of all households, 34.7% consisted of a single individual, and 8.6% had someone living alone who was 65 years of age or older. The average household size was 2.3 and the average family size was 3.0.
18.2% of the population was under the age of 18, 23.8% from 18 to 24, 26.4% from 25 to 44, 18.0% from 45 to 64, and 10.7% was 65 years of age or older. The median age was 28.8 years. For every 100 females, there were 93.3 males. For every 100 females ages 18 and older, there were 89.8 males.
The 2016–2020 5-year American Community Survey estimates show that the median household income was $53,447 (with a margin of error of +/- $2,355) and the median family income $81,392 (+/- $5,687). Males had a median income of $30,578 (+/- $2,131) versus $23,705 (+/- $1,849) for females. The median income for those ages 16 and older was $26,870 (+/- $1,429). Approximately 8.5% of families and 20.2% of the population were below the poverty line, including 15.7% of those under the age of 18 and 5.2% of those ages 65 or over.
As of the census of 2010, 108,500 people, 43,065 households, and 21,418 families resided in the city. The population density was 1,720.0 inhabitants per square mile (664.1/km²). There were 46,758 housing units at an average density of 741.2 per square mile (286.2/km²). The racial makeup of the city was 79.0% White, 11.3% African American, 0.3% Native American, 5.2% Asian, 0.1% Pacific Islander, 1.1% from other races, and 3.1% from two or more races. Hispanic or Latino people of any race were 3.4% of the population.
There were 43,065 households, of which 26.1% had children under the age of 18 living with them, 35.6% were married couples living together, 10.6% had a female householder with no husband present, 3.5% had a male householder with no wife present, and 50.3% were non-families. 32.0% of all households were made up of individuals, and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.32 and the average family size was 2.94.
In the city the population was spread out, with 18.8% of residents under the age of 18; 27.3% between the ages of 18 and 24; 26.7% from 25 to 44; 18.6% from 45 to 64; and 8.5% who were 65 years of age or older. The median age in the city was 26.8 years. The gender makeup of the city was 48.3% male and 51.7% female.
As of the census of 2000, there were 84,531 people, 33,689 households, and 17,282 families residing in the city. The population density was 1,592.8 inhabitants per square mile (615.0/km²). There were 35,916 housing units at an average density of 676.8 per square mile (261.3/km²). The racial makeup of the city was 81.54% White, 10.85% Black or African American, 0.39% Native American, 4.30% Asian, 0.04% Pacific Islander, 0.81% from other races, and 2.07% from two or more races. Hispanic or Latino people of any race were 2.05% of the population.
There were 33,689 households, out of which 26.1% had children under the age of 18 living with them, 38.2% were married couples living together, 10.3% had a female householder with no husband present, and 48.7% were non-families. 33.1% of all households were made up of individuals, and 6.5% had someone living alone who was 65 years of age or older. The average household size was 2.26 and the average family size was 2.92.
In the city, the population was spread out, with 19.7% under the age of 18, 26.7% from 18 to 24, 28.7% from 25 to 44, 16.2% from 45 to 64, and 8.6% who were 65 years of age or older. The median age was 27 years. For every 100 females, there were 91.8 males. For every 100 females age 18 and over, there were 89.1 males.
The median income for a household in the city was $33,729, and the median income for a family was $52,288. Males had a median income of $34,710 versus $26,694 for females. The per capita income for the city was $19,507. About 9.4% of families and 19.2% of the population were below the poverty line, including 14.8% of those under age 18 and 5.2% of those age 65 or over. However, traditional statistics of income and poverty can be misleading when applied to cities with high student populations, such as Columbia.
Columbia's economy has historically been dominated by education, healthcare, and insurance. Government jobs are also common, whether in Columbia or a half-hour south in Jefferson City. The Columbia Regional Airport and the Missouri River Port of Rocheport connect the region to wider trade and transportation networks.
With a Gross Metropolitan Product of $9.6 billion in 2018, Columbia's economy makes up 3% of the Gross State Product of Missouri. Columbia's metro area economy is slightly larger than the economy of Rwanda. Insurance corporations headquartered in Columbia include Shelter Insurance and the Columbia Insurance Group. Other organizations based in the city include StorageMart, Veterans United Home Loans, MFA Incorporated, the Missouri State High School Activities Association, and MFA Oil. Companies such as Socket, Datastorm Technologies, Inc. (now defunct), Slackers CDs and Games, Carfax, and MBS Textbook Exchange were all founded in Columbia.
According to Columbia's 2022 Annual Comprehensive Financial Report, the top employers in the city are:
The Missouri Theatre Center for the Arts and Jesse Auditorium are Columbia's largest fine arts venues. Ragtag Cinema annually hosts the True/False Film Festival.
In 2008, filmmaker Todd Sklar completed the film Box Elder, which was filmed entirely in and around Columbia and the University of Missouri.
The North Village Arts District, located on the north side of downtown, is home to galleries, restaurants, theaters, bars, music venues, and the Missouri Contemporary Ballet.
The University of Missouri's Museum of Art and Archaeology displays 14,000 works of art and archaeological objects in five galleries, free of charge to the public. Libraries include the Columbia Public Library, the University of Missouri Libraries, with over three million volumes in Ellis Library, and the State Historical Society of Missouri.
The "We Always Swing" Jazz Series and the Roots N Blues Festival is held in Columbia. "9th Street Summerfest" (now hosted in Rose Park at Rose Music Hall) closes part of that street several nights each summer to hold outdoor performances and has featured Willie Nelson (2009), Snoop Dogg (2010), The Flaming Lips (2010), Weird Al Yankovic (2013), and others. The "University Concert Series" regularly includes musicians and dancers from various genres, typically in Jesse Hall. Other musical venues in town include the Missouri Theatre, the university's multipurpose Hearnes Center, the university's Mizzou Arena, The Blue Note, and Rose Music Hall. Shelter Gardens, a park on the campus of Shelter Insurance headquarters, also hosts outdoor performances during the summer.
The University of Missouri School of Music attracts hundreds of musicians to Columbia; student performances are held in Whitmore Recital Hall. Non-profit classical music organizations include the "Odyssey Chamber Music Series", "Missouri Symphony", "Columbia Community Band", and "Columbia Civic Orchestra". Founded in 2006, the "Plowman Chamber Music Competition" is a biennial competition held in March or April of odd-numbered years and is considered among the top five chamber music competitions in the nation.
Columbia offers multiple opportunities to watch and perform in theatrical productions. Ragtag Cinema is one of the best-known theaters in Columbia. The city is home to Stephens College, a private institution known for the performing arts; its season includes multiple plays and musicals. The University of Missouri and Columbia College also present multiple productions a year.
The city's three public high schools are also known for their productions. Rock Bridge High School performs a musical in November and two plays in the spring. Hickman High School performs a similar season, with two musicals (one in the fall and one in the spring) and two plays (one in the winter and one at the end of the school year). The newest high school, Battle High, opened in 2013 and is also known for its productions; Battle presents a musical in the fall and a play in the spring, along with improv nights and other productions throughout the year.
The city is also home to the indoor/outdoor theatre Maplewood Barn Theatre in Nifong Park and other community theatre programs such as Columbia Entertainment Company, Talking Horse Productions, Pace Youth Theatre and TRYPS.
The University of Missouri's sports teams, the Missouri Tigers, play a significant role in the city's sports culture. Faurot Field at Memorial Stadium, which has a capacity of 62,621, hosts home football games. The Hearnes Center and Mizzou Arena are two other large sport and event venues, the latter being the home arena for Mizzou's basketball team. Taylor Stadium hosts the baseball team and was the regional host for the 2007 NCAA Baseball Championship. Columbia College fields several men's and women's collegiate sports teams as well. In 2007, Columbia hosted the National Association of Intercollegiate Athletics Volleyball National Championship, in which the Lady Cougars participated.
Columbia also hosts the Show-Me State Games, a non-profit program of the Missouri Governor's Council on Physical Fitness and Health. They are the largest state games in the United States.
Because Columbia sits midway between St. Louis and Kansas City, its residents often hold allegiances to the professional sports teams based in those cities, such as the St. Louis Cardinals, the Kansas City Royals, the Kansas City Chiefs, the St. Louis Blues, Sporting Kansas City, and St. Louis City SC.
Columbia has many bars and restaurants that provide diverse styles of cuisine, due in part to having three colleges. The oldest is the historic Booches bar, restaurant, and pool hall, which was established in 1884 and is frequented by college students. Shakespeare's Pizza was founded in Columbia and is known for its college town pizza.
Throughout the city are many parks and trails for public use. Among the most frequented is the MKT Trail, a spur that connects to the Katy Trail just south of Columbia proper. The MKT ranked second in the nation for "Best Urban Trail" in the 2015 USA Today 10 Best Readers' Choice Awards. The 10-foot-wide trail, built on the old railbed of the MKT railroad, begins in downtown Columbia in Flat Branch Park at 4th and Cherry Streets; its all-weather crushed-limestone surface provides opportunities for walking, jogging, running, and bicycling. Stephens Lake Park is the highlight of Columbia's park system and is known for its 11-acre fishing and swimming lake, mature trees, and historical significance in the community. It serves as the center for outdoor winter sports, community festivals such as the Roots N Blues Festival, and outdoor concert series at the amphitheater, and offers reservable shelters, playgrounds, a swimming beach and spraygrounds, art sculptures, waterfalls, and walking trails. Rock Bridge Memorial State Park is open year-round, giving visitors the chance to scramble, hike, and bicycle through a scenic environment; it contains some of the most popular hiking trails in the state, including the Gans Creek Wild Area. Columbia is also home to Harmony Bends Disc Golf Course, which was named the 2017 Disc Golf Course of the Year by DGCourseReview.com. As of June 2022, Harmony Bends ranked on DGCourseReview.com as the No. 1 public course and the No. 2 course overall in the United States.
The city has two daily morning newspapers: the Columbia Missourian and the Columbia Daily Tribune. The Missourian is directed by professional editors and staffed by Missouri School of Journalism students who do reporting, design, copy editing, information graphics, photography, and multimedia. The Missourian publishes the monthly city magazine Vox Magazine. The University of Missouri has an independent official bi-weekly student newspaper, The Maneater, and the quarterly literary magazine The Missouri Review. The now-defunct Prysms Weekly was also published in Columbia. In late 2009, KCOU News launched full operations out of KCOU 88.1 FM on the MU campus. The entirely student-run news organization airs a weekday newscast, The Pulse.
The city has four television channels. Columbia Access Television (CAT or CAT-TV) is the public access channel. CPSTV is the education access channel, managed by Columbia Public Schools as a function of the district's Community Relations Department. The Government Access channel broadcasts City Council, Planning and Zoning Commission, and Board of Adjustment meetings.
Columbia has 19 radio stations, as well as stations licensed in Jefferson City, Macon, and Lake of the Ozarks.
Columbia's current government was established by a home rule charter adopted by voters on November 11, 1974, which created a council-manager government vesting power in the city council. The city council has seven members: six elected from Columbia's six single-member districts, or wards, and an at-large member, the mayor, who is elected by all city voters. The mayor receives a $9,000 annual stipend, and the six other members receive a $6,000 annual stipend; all are elected to staggered three-year terms. As well as serving as a voting member of the city council, the mayor is recognized as the head of city government for ceremonial purposes. Chief executive authority is vested in a hired city manager, who oversees the government's day-to-day operations.
Columbia is the county seat of Boone County, and houses the county court and government center. The city is in Missouri's 4th congressional district. The 19th Missouri State Senate district covers all of Boone County. There are five Missouri House of Representatives districts (9, 21, 23, 24, and 25) in the city. The Columbia Police Department provides law enforcement across the city, while the Columbia Fire Department provides fire protection. The University of Missouri Police Department also patrols areas on and around the University of Missouri campus and has jurisdiction throughout the state. Additionally, the Boone County Sheriff's Department, the law enforcement agency for the county, regularly patrols the city. The Public Service Joint Communications Center coordinates efforts among these agencies and the Boone County Fire Protection District, which operates Urban Search and Rescue Missouri Task Force 1.
The population generally supports progressive causes, such as recycling programs and the decriminalization of cannabis for both medical and recreational use at the municipal level, though the scope of the latter ordinance has since been restricted. The city is one of only four in the state to offer medical benefits to same-sex partners of city employees. The city's health plan also extends health benefits to unmarried heterosexual domestic partners of city employees.
On October 10, 2006, the city council approved an ordinance prohibiting smoking in public places, including restaurants and bars. The ordinance was passed over protest, and several amendments to it reflect this. Over half of residents hold at least a bachelor's degree, and over a quarter hold a graduate degree; Columbia is the 13th most highly educated municipality in the United States.
Almost all of the Columbia city limits, and much of the surrounding area, lies within the Columbia Public School District. The district enrolled more than 18,000 students and had a budget of $281 million for the 2019–20 school year.
Among city adults age 25 and older, 95.4% have a high school diploma. In 2022, Columbia Public Schools recorded a 67.7% attendance rate, lower than the state average of 76.2%. The graduation rate for the class of 2022 was 90%, while the class of 2021's rate was 89%; statewide, Missouri's overall graduation rate for 2022 was 91.16%. The Columbia school district operates four public high schools covering grades 9–12: David H. Hickman High School, Rock Bridge High School, Muriel Battle High School, and Frederick Douglass High School. Rock Bridge is one of two Missouri high schools to receive a silver medal from U.S. News & World Report, putting it in the top 3% of all high schools in the nation. Hickman has been on Newsweek magazine's list of the top 1,300 schools in the country for the past three years and has more named presidential scholars than any other public high school in the US. There are also several private high schools in the city, including Christian Fellowship School, Columbia Independent School, Heritage Academy, Christian Chapel Academy, and Tolton High School.
CPS also manages seven middle schools: Jefferson, West, Oakland, Gentry, Smithton, Lange, and John Warner. John Warner Middle School first opened for the 2020–21 school year.
A very small portion of the city limits is in Hallsville R-IV School District. The sole high school of that district is Hallsville High School.
The United States Census Bureau estimated that 55.3% of adults ages 25 and up hold a bachelor's degree or higher.
The city has three institutions of higher education: the University of Missouri, Stephens College, and Columbia College, all of which surround Downtown Columbia. The city is the headquarters of the University of Missouri System, which operates campuses in St. Louis, Kansas City, and Rolla. Moberly Area Community College, Central Methodist University, and William Woods University also operate satellite campuses in Columbia.
Columbia Transit provides public bus and paratransit service and is owned and operated by the city. In 2008, 1,414,400 passengers boarded along the system's six fixed routes and nine University of Missouri shuttle routes, and 27,000 boarded the paratransit service. The system has grown steadily in service and technology. A $3.5 million project to renovate and expand the Wabash Station, a rail depot built in 1910 and converted into the city's transit center in the mid-1980s, was completed in the summer of 2007. That year, a Transit Master Plan was created to address the future transit needs of the city and county with a comprehensive plan to add infrastructure in three key phases. The five- to 15-year plan intends to add service along the southwest, southeast, and northeast sections of Columbia and develop alternative transportation models for Boone County.
The city is served by Columbia Regional Airport. The closest rail station is Jefferson City station, in the state capital Jefferson City.
Columbia is also known for its MKT Trail, a spur of the Katy Trail State Park, which allows foot and bike traffic across the city and, via the Katy Trail, across much of the state. It consists of a soft gravel surface for running and biking. Columbia is preparing to begin construction of several new bike paths and street bike lanes, funded by a $25 million grant from the federal government. The city is also served by American Airlines at the Columbia Regional Airport, the only commercial airport in mid-Missouri.
I-70 (concurrent with US 40) and US 63 are the two main freeways used for travel to and from Columbia. Within the city, there are also three state highways: Routes 763 (Rangeline Street & College Avenue), 163 (Providence Road), and 740 (Stadium Boulevard).
Rail service is provided by the city-owned Columbia Terminal Railroad (COLT), which runs from the north side of Columbia to Centralia, where it connects to the Norfolk Southern Railway. Columbia would be at the center of the proposed Missouri Hyperloop, which would reduce travel times to Kansas City and St. Louis to around 15 minutes.
Health care is a major part of Columbia's economy: nearly one in six residents works in a health-care-related profession, and the city's physician density is about three times the United States average. The city's hospitals and supporting facilities serve as a large referral center for the state, and medical-related trips to the city are common. There are three hospital systems within the city and five hospitals with a total of 1,105 beds.
The University of Missouri Health Care operates three hospitals in Columbia: the University of Missouri Hospital, the University of Missouri Women's and Children's Hospital (formerly Columbia Regional Hospital), and the Ellis Fischel Cancer Center. Boone Hospital Center is administered by BJC Healthcare and operates several clinics as well as outpatient locations. The Harry S. Truman Memorial Veterans' Hospital, adjacent to University Hospital, is administered by the United States Department of Veterans Affairs.
There are a large number of medical-related industries in Columbia. The University of Missouri School of Medicine uses university-owned facilities as teaching hospitals. The University of Missouri Research Reactor Center houses the largest university research reactor in the United States and produces radioisotopes used in nuclear medicine. The center serves as the sole supplier of the active ingredients in two U.S. Food and Drug Administration-approved radiopharmaceuticals and produces fluorine-18 used in PET imaging with its cyclotron.
In accordance with the Columbia Sister Cities Program, which operates in conjunction with Sister Cities International, Columbia has been paired with five international sister cities in an attempt to foster cross-cultural understanding:
{
"paragraph_id": 0,
"text": "Columbia /kəˈlʌmbiə/ is a city in the U.S. state of Missouri. It is the county seat of Boone County and home to the University of Missouri. Founded in 1821, it is the principal city of the five-county Columbia metropolitan area. It is Missouri's 4th most populous with an estimated 128,555 residents in 2022.",
"title": ""
},
{
"paragraph_id": 1,
"text": "As a Midwestern college town, Columbia maintains high-quality health care facilities, cultural opportunities, and a low cost of living. The tripartite establishment of Stephens College (1833), the University of Missouri (1839), and Columbia College (1851), which surround the city's Downtown to the east, south, and north, has made Columbia a center of learning. At its center is 8th Street (also known as the Avenue of the Columns), which connects Francis Quadrangle and Jesse Hall to the Boone County Courthouse and the City Hall. Originally an agricultural town, education is now Columbia's primary economic concern, with secondary interests in the healthcare, insurance, and technology sectors; it has never been a manufacturing center. Companies like Shelter Insurance, Carfax, Veterans United Home Loans, and Slackers CDs and Games, were founded in the city. Cultural institutions include the State Historical Society of Missouri, the Museum of Art and Archaeology, and the annual True/False Film Festival and the Roots N Blues Festival. The Missouri Tigers, the state's only major college athletic program, play football at Faurot Field and basketball at Mizzou Arena as members of the rigorous Southeastern Conference.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The city rests upon the forested hills and rolling prairies of Mid-Missouri, near the Missouri River valley, where the Ozark Mountains begin to transform into plains and savanna. Limestone forms bluffs and glades while rain dissolves the bedrock, creating caves and springs which water the Hinkson, Roche Perche, and Bonne Femme creeks. Surrounding the city, Rock Bridge Memorial State Park, Mark Twain National Forest, and Big Muddy National Fish and Wildlife Refuge form a greenbelt preserving sensitive and rare environments. The Columbia Agriculture Park is home to the Columbia Farmers Market.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The first humans who entered the area at least 12,000 years ago were nomadic hunters. Later, woodland tribes lived in villages along waterways and built mounds in high places. The Osage and Missouria nations were expelled by the exploration of French traders and the rapid settlement of American pioneers. The latter arrived by the Boone's Lick Road and hailed from the culture of the Upland South, especially Virginia, Kentucky, and Tennessee. From 1812, the Boonslick area played a pivotal role in Missouri's early history and the nation's westward expansion. German, Irish, and other European immigrants soon joined. The modern populace is unusually diverse, over 8% foreign-born. White and black people are the largest ethnicities, and people of Asian descent are the third-largest group. Columbia has been known as the \"Athens of Missouri\" for its classic beauty and educational emphasis, but is more commonly called \"CoMo\".",
"title": ""
},
{
"paragraph_id": 4,
"text": "Columbia's origins begin with the settlement of American pioneers from Kentucky and Virginia in an early 1800s region known as the Boonslick. Before 1815 settlement in the region was confined to small log forts due to the threat of Native American attack during the War of 1812. When the war ended settlers came on foot, horseback, and wagon, often moving entire households along the Boone's Lick Road and sometimes bringing enslaved African Americans. By 1818 it was clear that the increased population would necessitate a new county be created from territorial Howard County. The Moniteau Creek on the west and Cedar Creek on the east were obvious natural boundaries.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Believing it was only a matter of time before a county seat was chosen, the Smithton Land Company was formed to purchase over 2,000 acres (8.1 km) to establish the village of Smithton (near the present-day intersection of Walnut and Garth). In 1819 Smithton was a small cluster of log cabins in an ancient forest of oak and hickory; chief among them was the cabin of Richard Gentry, a trustee of the Smithton Company who would become first mayor of Columbia. In 1820, Boone County was formed and named after the recently deceased explorer Daniel Boone. The Missouri Legislature appointed John Gray, Jefferson Fulcher, Absalom Hicks, Lawrence Bass, and David Jackson as commissioners to select and establish a permanent county seat. Smithton never had more than twenty people, and it was quickly realized that well digging was difficult because of the bedrock.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Springs were discovered across the Flat Branch Creek, so in the spring of 1821 Columbia was laid out, and the inhabitants of Smithton moved their cabins to the new town. The first house in Columbia was built by Thomas Duly in 1820 at what became Fifth and Broadway. Columbia's permanence was ensured when it was chosen as county seat in 1821 and the Boone's Lick Road was rerouted down Broadway.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The roots of Columbia's three economic foundations—education, medicine, and insurance— can be traced to the city's incorporation in 1821. Original plans for the town set aside land for a state university. In 1833, Columbia Baptist Female College opened, which later became Stephens College. Columbia College, distinct from today's and later to become the University of Missouri, was founded in 1839. When the state legislature decided to establish a state university, Columbia raised three times as much money as any competing city, and James S. Rollins donated the land that is today the Francis Quadrangle. Soon other educational institutions were founded in Columbia, such as Christian Female College, the first college for women west of the Mississippi, which later became Columbia College.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The city benefited from being a stagecoach stop of the Santa Fe and Oregon trails, and later from the Missouri–Kansas–Texas Railroad. In 1822, William Jewell set up the first hospital. In 1830, the first newspaper began; in 1832, the first theater in the state was opened; and in 1835, the state's first agricultural fair was held. By 1839, the population of 13,000 and wealth of Boone County was exceeded in Missouri only by that of St. Louis County, which, at that time, included the City of St. Louis.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Columbia's infrastructure was relatively untouched by the Civil War. As a slave state, Missouri had many residents with Southern sympathies, but it stayed in the Union. The majority of the city was pro-Union; however, the surrounding agricultural areas of Boone County and the rest of central Missouri were decidedly pro-Confederate. Because of this, the University of Missouri became a base from which Union troops operated. No battles were fought within the city because the presence of Union troops dissuaded Confederate guerrillas from attacking, though several major battles occurred at nearby Boonville and Centralia.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "After Reconstruction, race relations in Columbia followed the Southern pattern of increasing violence of whites against blacks in efforts to suppress voting and free movement: George Burke, a black man who worked at the university, was lynched in 1889. In the spring of 1923, James T. Scott, an African-American janitor at the University of Missouri, was arrested on allegations of raping a university professor's daughter. He was taken from the county jail and lynched on April 29 before a white mob of roughly two thousand people, hanged from the Old Stewart Road Bridge.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "In the 21st century, a number of efforts have been undertaken to recognize Scott's death. In 2010 his death certificate was changed to reflect that he was never tried or convicted of charges, and that he had been lynched. In 2011 a headstone was put at his grave at Columbia Cemetery; it includes his wife's and parents' names and dates, to provide a fuller account of his life. In 2016, a marker was erected at the lynching site to memorialize Scott. In 1901, Rufus Logan established The Columbia Professional newspaper to serve Columbia's large African American population.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1963, University of Missouri System and the Columbia College system established their headquarters in Columbia. The insurance industry also became important to the local economy as several companies established headquarters in Columbia, including Shelter Insurance, Missouri Employers Mutual, and Columbia Insurance Group. State Farm Insurance has a regional office in Columbia. In addition, the now-defunct Silvey Insurance was a large local employer.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Columbia became a transportation crossroads when U.S. Route 63 and U.S. Route 40 (which was improved as present-day Interstate 70) were routed through the city. Soon after, the city opened the Columbia Regional Airport. By 2000, the city's population was nearly 85,000.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In 2017, Columbia was in the path of totality for the Solar eclipse of August 21, 2017. The city was expecting upwards of 400,000 tourists coming to view the eclipse.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Columbia, in northern mid-Missouri, is 120 miles (190 km) away from both St. Louis and Kansas City, and 29 miles (47 km) north of the state capital of Jefferson City. The city is near the Missouri River, between the Ozark Plateau and the Northern Plains.",
"title": "Geography"
},
{
"paragraph_id": 16,
"text": "According to the United States Census Bureau, the city has a total area of 67.45 square miles (174.69 km), of which 67.17 square miles (173.97 km) is land and 0.28 square miles (0.73 km) is water.",
"title": "Geography"
},
{
"paragraph_id": 17,
"text": "The city generally slopes from the highest point in the Northeast to the lowest point in the Southwest towards the Missouri River. Prominent tributaries of the river are Perche Creek, Hinkson Creek, and Flat Branch Creek. Along these and other creeks in the area can be found large valleys, cliffs, and cave systems such as that in Rock Bridge State Park just south of the city. These creeks are largely responsible for numerous stream valleys giving Columbia hilly terrain similar to the Ozarks while also having prairie flatland typical of northern Missouri. Columbia also operates several greenbelts with trails and parks throughout town.",
"title": "Geography"
},
{
"paragraph_id": 18,
"text": "Large mammals found in the city include urbanized coyotes, red foxes, and numerous whitetail deer. Eastern gray squirrel, and other rodents are abundant, as well as cottontail rabbits and the nocturnal opossum and raccoon. Large bird species are abundant in parks and include the Canada goose, mallard duck, as well as shorebirds, including the great egret and great blue heron. Turkeys are also common in wooded areas and can occasionally be seen on the MKT recreation trail. Populations of bald eagles are found by the Missouri River. The city is on the Mississippi Flyway, used by migrating birds, and has a large variety of small bird species, common to the eastern U.S. The Eurasian tree sparrow, an introduced species, is limited in North America to the counties surrounding St. Louis. Columbia has large areas of forested and open land and many of these areas are home to wildlife.",
"title": "Geography"
},
{
"paragraph_id": 19,
"text": "Columbia has a humid continental climate (Köppen Dfa) marked by sharp seasonal contrasts in temperature, and is in USDA Plant Hardiness Zone 6a. The monthly daily average temperature ranges from 31.0 °F (−0.6 °C) in January to 78.5 °F (25.8 °C) in July, while the high reaches or exceeds 90 °F (32 °C) on an average of 35 days per year, 100 °F (38 °C) on two days, while two nights of sub-0 °F (−18 °C) lows can be expected. Precipitation tends to be greatest and most frequent in the latter half of spring, when severe weather is also most common. Snow averages 16.5 inches (42 cm) per season, mostly from December to March, with occasional November accumulation and falls in April being rarer; historically seasonal snow accumulation has ranged from 3.4 in (8.6 cm) in 2005–06 to 54.9 in (139 cm) in 1977–78. Extreme temperatures have ranged from −26 °F (−32 °C) on February 12, 1899 to 113 °F (45 °C) on July 12 and 14, 1954. Readings of −10 °F (−23 °C) or 105 °F (41 °C) are uncommon, the last occurrences being January 7, 2014 and July 31, 2012.",
"title": "Geography"
},
{
"paragraph_id": 20,
"text": "Columbia's most significant and well-known architecture is found in buildings located in its downtown area and on the university campuses. The University of Missouri's Jesse Hall and the neo-gothic Memorial Union have become icons of the city. The David R. Francis Quadrangle is an example of Thomas Jefferson's academic village concept.",
"title": "Cityscape"
},
{
"paragraph_id": 21,
"text": "Nine historic districts located within the city are listed on the National Register of Historic Places: Downtown Columbia, the East Campus neighborhood, the West Broadway neighborhood, the Francis Quadrangle, the south campus of Stephens College, the Pierce Pennant Motor Hotel, Maplewood, and the David Guitar House. The downtown skyline is relatively low and is dominated by the 10-story Tiger Hotel and the 15-story Paquin Tower.",
"title": "Cityscape"
},
{
"paragraph_id": 22,
"text": "Downtown Columbia is an area of approximately one square mile surrounded by the University of Missouri on the south, Stephens College to the east, and Columbia College on the north. The area serves as Columbia's financial and business district.",
"title": "Cityscape"
},
{
"paragraph_id": 23,
"text": "Since the early-21st century, a large number of high-rise apartment complexes have been built in downtown Columbia. Many of these buildings also offer mixed-use business and retail space on the lower levels. These developments have not been without criticism, with some expressing concern the buildings hurt the historic feel of the area, or that the city does not yet have the infrastructure to support them.",
"title": "Cityscape"
},
{
"paragraph_id": 24,
"text": "The city's historic residential core lies in a ring around downtown, extending especially to the west along Broadway, and south into the East Campus Neighborhood. The city government recognizes 63 neighborhood associations. The city's most dense commercial areas are primarily along Interstate 70, U.S. Route 63, Stadium Boulevard, Grindstone Parkway, and Downtown.",
"title": "Cityscape"
},
{
"paragraph_id": 25,
"text": "The 2020 United States census counted 126,254 people, 49,371 households, and 25,144 families in Columbia. The population density was 1,879.6 inhabitants per square mile (725.7/km). There were 53,746 housing units at an average density of 800.1 per square mile (308.9/km). The racial makeup was 72.49% (91,516) white, 11.91% (15,038) black or African-American, 0.32% (398) Native American, 5.61% (7,084) Asian, 0.07% (89) Pacific Islander, 2.17% (2,734) from other races, and 7.44% (9,395) from two or more races. Hispanic or Latino of any race was 3.4% (4,173) of the population.",
"title": "Demographics"
},
{
"paragraph_id": 26,
"text": "Of the 49,371 households, 24.0% had children under the age of 18; 38.7% were married couples living together; 31.4% had a female householder with no husband present. Of all households, 34.7% were individuals and 8.6% had someone living alone who was 65 years of age or older. The average household size was 2.3 and the average family size was 3.0.",
"title": "Demographics"
},
{
"paragraph_id": 27,
"text": "18.2% of the population was under the age of 18, 23.8% from 18 to 24, 26.4% from 25 to 44, 18.0% from 45 to 64, and 10.7% who were 65 years of age or older. The median age was 28.8 years. For every 100 females, the population had 93.3 males. For every 100 females ages 18 and older, there were 89.8 males.",
"title": "Demographics"
},
{
"paragraph_id": 28,
"text": "The 2016-2020 5-year American Community Survey estimates show that the median household income was $53,447 (with a margin of error of +/- $2,355) and the median family income $81,392 (+/- $5,687). Males had a median income of $30,578 (+/- $2,131) versus $23,705 (+/- $1,849) for females. The median income for those above 16 years old was $26,870 (+/- $1,429). Approximately, 8.5% of families and 20.2% of the population were below the poverty line, including 15.7% of those under the age of 18 and 5.2% of those ages 65 or over.",
"title": "Demographics"
},
{
"paragraph_id": 29,
"text": "As of the census of 2010, 108,500 people, 43,065 households, and 21,418 families resided in the city. The population density was 1,720.0 inhabitants per square mile (664.1/km). There were 46,758 housing units at an average density of 741.2 per square mile (286.2/km). The racial makeup of the city was 79.0% White, 11.3% African American, 0.3% Native American, 5.2% Asian, 0.1% Pacific Islander, 1.1% from other races, and 3.1% from two or more races. Hispanic or Latino of any race were 3.4% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 30,
"text": "There were 43,065 households, of which 26.1% had children under the age of 18 living with them, 35.6% were married couples living together, 10.6% had a female householder with no husband present, 3.5% had a male householder with no wife present, and 50.3% were non-families. 32.0% of all households were made up of individuals, and 6.6% had someone living alone who was 65 years of age or older. The average household size was 2.32 and the average family size was 2.94.",
"title": "Demographics"
},
{
"paragraph_id": 31,
"text": "In the city the population was spread out, with 18.8% of residents under the age of 18; 27.3% between the ages of 18 and 24; 26.7% from 25 to 44; 18.6% from 45 to 64; and 8.5% who were 65 years of age or older. The median age in the city was 26.8 years. The gender makeup of the city was 48.3% male and 51.7% female.",
"title": "Demographics"
},
{
"paragraph_id": 32,
"text": "As of the census of 2000, there were 84,531 people, 33,689 households, and 17,282 families residing in the city. The population density was 1,592.8 inhabitants per square mile (615.0/km). There were 35,916 housing units at an average density of 676.8 per square mile (261.3/km). The racial makeup of the city was 81.54% White, 10.85% Black or African American, 0.39% Native American, 4.30% Asian, 0.04% Pacific Islander, 0.81% from other races, and 2.07% from two or more races. Hispanic or Latino of any race were 2.05% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 33,
"text": "There were 33,689 households, out of which 26.1% had children under the age of 18 living with them, 38.2% were married couples living together, 10.3% had a female householder with no husband present, and 48.7% were non-families. 33.1% of all households were made up of individuals, and 6.5% had someone living alone who was 65 years of age or older. The average household size was 2.26 and the average family size was 2.92.",
"title": "Demographics"
},
{
"paragraph_id": 34,
"text": "In the city, the population was spread out, with 19.7% under the age of 18, 26.7% from 18 to 24, 28.7% from 25 to 44, 16.2% from 45 to 64, and 8.6% who were 65 years of age or older. The median age was 27 years. For every 100 females, there were 91.8 males. For every 100 females age 18 and over, there were 89.1 males.",
"title": "Demographics"
},
{
"paragraph_id": 35,
"text": "The median income for a household in the city was $33,729, and the median income for a family was $52,288. Males had a median income of $34,710 versus $26,694 for females. The per capita income for the city was $19,507. About 9.4% of families and 19.2% of the population were below the poverty line, including 14.8% of those under age 18 and 5.2% of those age 65 or over. However, traditional statistics of income and poverty can be misleading when applied to cities with high student populations, such as Columbia.",
"title": "Demographics"
},
{
"paragraph_id": 36,
"text": "Columbia's economy is historically dominated by education, healthcare, and insurance. Jobs in government are also common, either in Columbia or a half-hour south in Jefferson City. The Columbia Regional Airport and the Missouri River Port of Rocheport connect the region with trade and transportation.",
"title": "Economy"
},
{
"paragraph_id": 37,
"text": "With a Gross Metropolitan Product of $9.6 billion in 2018, Columbia's economy makes up 3% of the Gross State Product of Missouri. Columbia's metro area economy is slightly larger than the economy of Rwanda. Insurance corporations headquartered in Columbia include Shelter Insurance and the Columbia Insurance Group. Other organizations include StorageMart, Veterans United Home Loans, MFA Incorporated, the Missouri State High School Activities Association, and MFA Oil. Companies such as Socket, Datastorm Technologies, Inc. (no longer existent), Slackers CDs and Games, Carfax, and MBS Textbook Exchange were all founded in Columbia.",
"title": "Economy"
},
{
"paragraph_id": 38,
"text": "According to Columbia's 2022 Annual Comprehensive Financial Report, the top employers in the city are:",
"title": "Economy"
},
{
"paragraph_id": 39,
"text": "The Missouri Theatre Center for the Arts and Jesse Auditorium are Columbia's largest fine arts venues. Ragtag Cinema annually hosts the True/False Film Festival.",
"title": "Culture"
},
{
"paragraph_id": 40,
"text": "In 2008, filmmaker Todd Sklar completed the film Box Elder, which was filmed entirely in and around Columbia and the University of Missouri.",
"title": "Culture"
},
{
"paragraph_id": 41,
"text": "The North Village Arts District, located on the north side of downtown, is home to galleries, restaurants, theaters, bars, music venues, and the Missouri Contemporary Ballet.",
"title": "Culture"
},
{
"paragraph_id": 42,
"text": "The University of Missouri's Museum of Art and Archaeology displays 14,000 works of art and archaeological objects in five galleries for no charge to the public. Libraries include the Columbia Public Library, the University of Missouri Libraries, with over three million volumes in Ellis Library, and the State Historical Society of Missouri.",
"title": "Culture"
},
{
"paragraph_id": 43,
"text": "The \"We Always Swing\" Jazz Series and the Roots N Blues Festival is held in Columbia. \"9th Street Summerfest\" (now hosted in Rose Park at Rose Music Hall) closes part of that street several nights each summer to hold outdoor performances and has featured Willie Nelson (2009), Snoop Dogg (2010), The Flaming Lips (2010), Weird Al Yankovic (2013), and others. The \"University Concert Series\" regularly includes musicians and dancers from various genres, typically in Jesse Hall. Other musical venues in town include the Missouri Theatre, the university's multipurpose Hearnes Center, the university's Mizzou Arena, The Blue Note, and Rose Music Hall. Shelter Gardens, a park on the campus of Shelter Insurance headquarters, also hosts outdoor performances during the summer.",
"title": "Culture"
},
{
"paragraph_id": 44,
"text": "The University of Missouri School of Music attracts hundreds of musicians to Columbia, student performances are held in Whitmore Recital Hall. Among many non-profit organizations for classical music are included the \"Odyssey Chamber Music Series\", \"Missouri Symphony\", \"Columbia Community Band\", and \"Columbia Civic Orchestra\". Founded in 2006, the \"Plowman Chamber Music Competition\" is a biennial competition held in March/April of odd-numbered years, considered to be one of the finest, top five chamber music competitions in the nation.",
"title": "Culture"
},
{
"paragraph_id": 45,
"text": "Columbia has multiple opportunities to watch and perform in theatrical productions. Ragtag Cinema is one of the most well known theaters in Columbia. The city is home to Stephens College, a private institution known for performing arts. Their season includes multiple plays and musicals. The University of Missouri and Columbia College also present multiple productions a year.",
"title": "Culture"
},
{
"paragraph_id": 46,
"text": "The city's three public high schools are also known for their productions. Rock Bridge High School performs a musical in November and two plays in the spring. Hickman High School also performs a similar season with two musical performances (one in the fall, and one in the spring) and 2 plays (one in the winter, and one at the end of their school year). The newest high school, Battle High, opened in 2013 and also is known for their productions. Battle presents a musical in the fall and a play in the spring, along with improv nights and more productions throughout the year.",
"title": "Culture"
},
{
"paragraph_id": 47,
"text": "The city is also home to the indoor/outdoor theatre Maplewood Barn Theatre in Nifong Park and other community theatre programs such as Columbia Entertainment Company, Talking Horse Productions, Pace Youth Theatre and TRYPS.",
"title": "Culture"
},
{
"paragraph_id": 48,
"text": "The University of Missouri's sports teams, the Missouri Tigers, play a significant role in the city's sports culture. Faurot Field at Memorial Stadium, which has a capacity of 62,621, hosts home football games. The Hearnes Center and Mizzou Arena are two other large sport and event venues, the latter being the home arena for Mizzou's basketball team. Taylor Stadium is host to their baseball team and was the regional host for the 2007 NCAA Baseball Championship. Columbia College has several men and women collegiate sports teams as well. In 2007, Columbia hosted the National Association of Intercollegiate Athletics Volleyball National Championship, which the Lady Cougars participated in.",
"title": "Culture"
},
{
"paragraph_id": 49,
"text": "Columbia also hosts the Show-Me State Games, a non-profit program of the Missouri Governor's Council on Physical Fitness and Health. They are the largest state games in the United States.",
"title": "Culture"
},
{
"paragraph_id": 50,
"text": "Situated midway between St. Louis and Kansas City, Columbians will often have allegiances to the professional sports teams housed there, such as the St. Louis Cardinals, the Kansas City Royals, the Kansas City Chiefs, the St. Louis Blues, Sporting Kansas City, and St. Louis City SC.",
"title": "Culture"
},
{
"paragraph_id": 51,
"text": "Columbia has many bars and restaurants that provide diverse styles of cuisine, due in part to having three colleges. The oldest is the historic Booches bar, restaurant, and pool hall, which was established in 1884 and is frequented by college students. Shakespeare's Pizza was founded in Columbia and is known for its college town pizza.",
"title": "Culture"
},
{
"paragraph_id": 52,
"text": "Throughout the city are many parks and trails for public usage. Among the more popularly frequented is the MKT which is a spur that connects to the Katy Trail, meeting up just south of Columbia proper. The MKT ranked second in the nation for \"Best Urban Trail\" in the 2015 USA Today's 10 Best Readers' Choice Awards. This 10-foot wide trail built on the old railbed of the MKT railroad begins in downtown Columbia in Flat Branch Park at 4th and Cherry Streets. The all-weather crushed limestone surface provides opportunities for walking, jogging, running, and bicycling. Stephens Lake Park is the highlight of Columbia's park system and is known for its 11-acre fishing/swimming lake, mature trees, and historical significance in the community. It serves as the center for outdoor winter sports, a variety of community festivals such as the Roots N Blues Festival, and outdoor concert series at the amphitheater. Stephens Lake has reservable shelters, playgrounds, swimming beach and spraygrounds, art sculptures, waterfalls, and walking trails. Rock Bridge Memorial State Park is open year-round giving visitors the chance to scramble, hike, and bicycle through a scenic environment. Rock Bridge State Park contains some of the most popular hiking trails in the state, including the Gans Creek Wild Area. Columbia is home to Harmony Bends Disc Golf Course (https://www.como.gov/contacts/harmony-bends-championship-disc-golf-course-strawn-park/), which was named the 2017 Disc Golf Course of the Year by DGCourseReview.com. As of June, 2022, Harmony Bends still continues to rank on DGCourseReview.com as the No. 1 public course, and #2 overall course in the United States",
"title": "Parks and recreation"
},
{
"paragraph_id": 53,
"text": "The city has two daily morning newspapers: the Columbia Missourian and the Columbia Daily Tribune. The Missourian is directed by professional editors and staffed by Missouri School of Journalism students who do reporting, design, copy editing, information graphics, photography, and multimedia. The Missourian publishes the monthly city magazine, Vox Magazine. The University of Missouri has the independent official bi-weekly student newspaper called The Maneater, and the quarterly literary magazine, The Missouri Review. The now-defunct Prysms Weekly was also published in Columbia. In late 2009, KCOU News launched full operations out of KCOU 88.1 FM on the MU Campus. The entirely student-run news organization airs a weekday newscast, The Pulse.",
"title": "Media"
},
{
"paragraph_id": 54,
"text": "The city has 4 television channels. Columbia Access Television (CAT or CAT-TV) is the public access channel. CPSTV is the education access channel, managed by Columbia Public Schools as a function of the Columbia Public Schools Community Relations Department. The Government Access channel broadcasts City Council, Planning and Zoning Commission, and Board of Adjustment meetings.",
"title": "Media"
},
{
"paragraph_id": 55,
"text": "Columbia has 19 radio stations as well as stations licensed from Jefferson City, Macon and, Lake of the Ozarks.",
"title": "Media"
},
{
"paragraph_id": 56,
"text": "",
"title": "Media"
},
{
"paragraph_id": 57,
"text": "",
"title": "Media"
},
{
"paragraph_id": 58,
"text": "Columbia's current government was established by a home rule charter adopted by voters on November 11, 1974, which established a council-manager government that invested power in the city council. The city council has seven members: six elected by each of Columbia's six single-member districts or wards and an at-large member, the mayor, who is elected by all city voters. The mayor receives a $9,000 annual stipend, and the six other members receive a $6,000 annual stipend. They are elected to staggered three-year terms. As well as serving as a voting member of the city council, the mayor is recognized as the head of city government for ceremonial purposes. Chief executive authority is invested in a hired city manager, who oversees the government's day-to-day operations.",
"title": "Government and politics"
},
{
"paragraph_id": 59,
"text": "Columbia is the county seat of Boone County, and houses the county court and government center. The city is in Missouri's 4th congressional district. The 19th Missouri State Senate district covers all of Boone County. There are five Missouri House of Representatives districts (9, 21, 23, 24, and 25) in the city. The Columbia Police Department provides law enforcement across the city, while the Columbia Fire Department provides fire protection. The University of Missouri Police Department also patrols areas on and around the University of Missouri campus and has jurisdiction throughout the state. Additionally, the Boone County Sheriff's Department, the law enforcement agency for the county, regularly patrols the city. The Public Service Joint Communications Center coordinates efforts between the two organizations as well as the Boone County Fire Protection District, which operates Urban Search and Rescue Missouri Task Force 1.",
"title": "Government and politics"
},
{
"paragraph_id": 60,
"text": "The population generally supports progressive causes, such as recycling programs and the decriminalization of cannabis both for medical and recreational use at the municipal level, though the scope of the latter of the two cannabis ordinances has since been restricted. The city is one of only four in the state to offer medical benefits to same-sex partners of city employees. The new health plan extends health benefits to unmarried heterosexual domestic partners of city employees.",
"title": "Government and politics"
},
{
"paragraph_id": 61,
"text": "On October 10, 2006, the city council approved an ordinance to prohibit smoking in public places, including restaurants and bars. The ordinance was passed over protest, and several amendments to the ordinance reflect this. Over half of residents possess at least a bachelor's degree, while over a quarter hold a graduate degree. Columbia is the 13th most-highly educated municipality in the United States.",
"title": "Government and politics"
},
{
"paragraph_id": 62,
"text": "Almost all of the Columbia city limits, and much of the surrounding area, lies within the Columbia Public School District. The district enrolled more than 18,000 students and had a budget of $281 million for the 2019–20 school year.",
"title": "Education"
},
{
"paragraph_id": 63,
"text": "While 95.4% of adults age 25 and older in the city have a high school diploma. In 2022, Columbia Public Schools recorded a 67.7% attendance rate, lower than the state average of 76.2%. Last year’s graduation rate for the class of 2022 was 90%, while the class of 2021’s graduation rate was reported at 89%. According to statewide numbers for 2022, Missouri’s overall graduation rate was 91.16%. The Columbia school district operates four public high schools which cover grades 9–12: David H. Hickman High School, Rock Bridge High School, Muriel Battle High School, and Frederick Douglass High School. Rock Bridge is one of two Missouri high schools to receive a silver medal by U.S. News & World Report, putting it in the Top 3% of all high schools in the nation. Hickman has been on Newsweek magazine's list of Top 1,300 schools in the country for the past three years and has more named presidential scholars than any other public high school in the US. There are also several private high schools located in the city, including Christian Fellowship School, Columbia Independent School, Heritage Academy, Christian Chapel Academy, and Tolton High School.",
"title": "Education"
},
{
"paragraph_id": 64,
"text": "CPS also manages seven middle schools: Jefferson, West, Oakland, Gentry, Smithton, Lange, and John Warner. John Warner Middle School first opened for the 2020/21 school year.",
"title": "Education"
},
{
"paragraph_id": 65,
"text": "A very small portion of the city limits is in Hallsville R-IV School District. The sole high school of that district is Hallsville High School.",
"title": "Education"
},
{
"paragraph_id": 66,
"text": "The United States census estimated that 55.3% of adults ages 25 and up hold a bachelors degree or higher.",
"title": "Education"
},
{
"paragraph_id": 67,
"text": "The city has three institutions of higher education: the University of Missouri, Stephens College, and Columbia College, all of which surround Downtown Columbia. The city is the headquarters of the University of Missouri System, which operates campuses in St. Louis, Kansas City, and Rolla. Moberly Area Community College, Central Methodist University, and William Woods University as well as operates satellite campuses in Columbia.",
"title": "Education"
},
{
"paragraph_id": 68,
"text": "The Columbia Transit provides public bus and para-transit service, and is owned and operated by the city. In 2008, 1,414,400 passengers boarded along the system's six fixed routes and nine University of Missouri shuttle routes, and 27,000 boarded the Para-transit service. The system is constantly experiencing growth in service and technology. A $3.5 million project to renovate and expand the Wabash Station, a rail depot built in 1910 and converted into the city's transit center in the mid-1980s, was completed in summer of 2007. In 2007, a Transit Master Plan was created to address the future transit needs of the city and county with a comprehensive plan to add infrastructure in three key phases. The five to 15-year plan intends to add service along the southwest, southeast and northeast sections of Columbia and develop alternative transportation models for Boone County.",
"title": "Infrastructure"
},
{
"paragraph_id": 69,
"text": "The city is served by Columbia Regional Airport. The closest rail station is Jefferson City station, in the state capital Jefferson City.",
"title": "Infrastructure"
},
{
"paragraph_id": 70,
"text": "Columbia is also known for its MKT Trail, a spur of the Katy Trail State Park, which allows foot and bike traffic across the city, and, conceivably, the state. It consists of a soft gravel surface for running and biking. Columbia also is preparing to embark on construction of several new bike paths and street bike lanes thanks to a $25 million grant from the federal government. The city is also served by American Airlines at the Columbia Regional Airport, the only commercial airport in mid-Missouri.",
"title": "Infrastructure"
},
{
"paragraph_id": 71,
"text": "I-70 (concurrent with US 40) and US 63 are the two main freeways used for travel to and from Columbia. Within the city, there are also three state highways: Routes 763 (Rangeline Street & College Avenue), 163 (Providence Road), and 740 (Stadium Boulevard).",
"title": "Infrastructure"
},
{
"paragraph_id": 72,
"text": "Rail service is provided by the city-owned Columbia Terminal Railroad (COLT), which runs from the north side of Columbia to Centralia and a connection to the Norfolk Southern Railway. Columbia would be at the center of the proposed Missouri Hyperloop, reducing travel times to Kansas City and St. Louis to around 15 minutes.",
"title": "Infrastructure"
},
{
"paragraph_id": 73,
"text": "Health care is a big part of Columbia's economy, with nearly one in six people working in a health-care related profession and a physician density that is about three times the United States average. The city's hospitals and supporting facilities are a large referral center for the state, and medical related trips to the city are common. There are three hospital systems within the city and five hospitals with a total of 1,105 beds.",
"title": "Infrastructure"
},
{
"paragraph_id": 74,
"text": "The University of Missouri Health Care operates three hospitals in Columbia: the University of Missouri Hospital, the University of Missouri Women's and Children's Hospital (formerly Columbia Regional Hospital), and the Ellis Fischel Cancer Center. Boone Hospital Center is administered by BJC Healthcare and operates several clinics as well as outpatient locations. The Harry S. Truman Memorial Veterans' Hospital, adjacent to University Hospital, is administered by the United States Department of Veterans Affairs.",
"title": "Infrastructure"
},
{
"paragraph_id": 75,
"text": "There are a large number of medical-related industries in Columbia. The University of Missouri School of Medicine uses university-owned facilities as teaching hospitals. The University of Missouri Research Reactor Center is the largest research reactor in the United States and produces radioisotopes used in nuclear medicine. The center serves as the sole supplier of the active ingredients in two U.S. Food and Drug Administration-approved radiopharmaceuticals and produces Fluorine-18 used in PET imaging with its cyclotron.",
"title": "Infrastructure"
},
{
"paragraph_id": 76,
"text": "In accordance with the Columbia Sister Cities Program, which operates in conjunction with Sister Cities International, Columbia has been paired with five international sister cities in an attempt to foster cross-cultural understanding:",
"title": "Sister cities"
},
{
"paragraph_id": 77,
"text": "",
"title": "External links"
}
] | Columbia is a city in the U.S. state of Missouri. It is the county seat of Boone County and home to the University of Missouri. Founded in 1821, it is the principal city of the five-county Columbia metropolitan area. It is Missouri's fourth-most-populous city, with an estimated 128,555 residents in 2022. As a Midwestern college town, Columbia maintains high-quality health care facilities, cultural opportunities, and a low cost of living. The tripartite establishment of Stephens College (1833), the University of Missouri (1839), and Columbia College (1851), which surround the city's Downtown to the east, south, and north, has made Columbia a center of learning. At its center is 8th Street, which connects Francis Quadrangle and Jesse Hall to the Boone County Courthouse and the City Hall. Originally an agricultural town, Columbia now has education as its primary economic concern, with secondary interests in the healthcare, insurance, and technology sectors; it has never been a manufacturing center. Companies like Shelter Insurance, Carfax, Veterans United Home Loans, and Slackers CDs and Games were founded in the city. Cultural institutions include the State Historical Society of Missouri, the Museum of Art and Archaeology, and the annual True/False Film Festival and Roots N Blues Festival. The Missouri Tigers, the state's only major college athletic program, play football at Faurot Field and basketball at Mizzou Arena as members of the Southeastern Conference. The city rests upon the forested hills and rolling prairies of Mid-Missouri, near the Missouri River valley, where the Ozark Mountains begin to transform into plains and savanna. Limestone forms bluffs and glades while rain dissolves the bedrock, creating caves and springs which water the Hinkson, Roche Perche, and Bonne Femme creeks. Surrounding the city, Rock Bridge Memorial State Park, Mark Twain National Forest, and Big Muddy National Fish and Wildlife Refuge form a greenbelt preserving sensitive and rare environments. The Columbia Agriculture Park is home to the Columbia Farmers Market. The first humans who entered the area at least 12,000 years ago were nomadic hunters. Later, woodland tribes lived in villages along waterways and built mounds in high places. The Osage and Missouria nations were displaced by the exploration of French traders and the rapid settlement of American pioneers. The latter arrived via the Boone's Lick Road and hailed from the culture of the Upland South, especially Virginia, Kentucky, and Tennessee. From 1812, the Boonslick area played a pivotal role in Missouri's early history and the nation's westward expansion. German, Irish, and other European immigrants soon joined them. The modern populace is unusually diverse, with over 8% of residents foreign-born. White and black people are the largest ethnicities, and people of Asian descent are the third-largest group. Columbia has been known as the "Athens of Missouri" for its classic beauty and educational emphasis, but is more commonly called "CoMo". | 2001-10-07T22:31:08Z | 2023-12-20T21:31:43Z | [
"Template:Columbia, Missouri weatherbox",
"Template:Boone County, Missouri",
"Template:Div col end",
"Template:Reflist",
"Template:Dead link",
"Template:Columbia MO TV",
"Template:Columbia, Missouri",
"Template:Good article",
"Template:Main",
"Template:Columbia MO Radio",
"Template:Col-begin",
"Template:Col-2-of-3",
"Template:Col-end",
"Template:Use mdy dates",
"Template:Infobox settlement",
"Template:Flagdeco",
"Template:Cite web",
"Template:Missouri county seats",
"Template:Missouri cities and mayors of 100,000 population",
"Template:Authority control",
"Template:Div col",
"Template:Col-1-of-3",
"Template:Col-3-of-3",
"Template:Notelist",
"Template:Cite book",
"Template:Convert",
"Template:Historical populations",
"Template:Sister project links",
"Template:Missouri",
"Template:IPAc-en",
"Template:'"
] | https://en.wikipedia.org/wiki/Columbia,_Missouri |
6,720 | Charlton Athletic F.C. | Charlton Athletic Football Club is an English professional football club based in Charlton, south-east London, which compete in EFL League One, the third tier of the English football league system.
Their home ground is The Valley, where the club have played since 1919. They also played at The Mount in Catford during the 1923–24 season, and spent seven years at Selhurst Park and the Boleyn Ground between 1985 and 1992, first because of financial issues and then because of safety concerns raised by the local council. The club's traditional kit consists of red shirts, white shorts and red socks, and their most commonly used nickname is The Addicks. Charlton share local rivalries with fellow South London clubs Crystal Palace and Millwall.
The club was founded on 9 June 1905 and turned professional in 1920. They spent one season in the Kent League and one season in the Southern League, before being invited to join the newly formed Football League Third Division South in 1921. They won the division in the 1928–29 season, and again in 1934–35 following relegation in 1933. Charlton were promoted out of the Second Division in 1935–36, and finished second in the First Division the next season. Having been beaten finalists in 1946, they lifted the FA Cup the following year with a 1–0 victory over Burnley. The departure in 1956 of Jimmy Seed, manager for 23 years, saw the club relegated from the top flight the following year. Relegated again in 1972, Charlton were promoted from the Third Division in 1974–75, and again in 1980–81 following relegation the previous season.
Charlton recovered from administration to secure promotion back to the First Division in 1985–86, and went on to lose in the 1987 final of the Full Members' Cup, though they won the 1987 play-off final to retain their top-flight status. Having been relegated in 1990, Charlton won the 1998 play-off final to make their debut in the Premier League. Though they were relegated the next year, manager Alan Curbishley took them back up as champions in 1999–2000. Charlton spent seven successive years in the Premier League, before suffering two relegations in three years. They won League One with 101 points in 2011–12, though they were relegated from the Championship in 2016, and again in 2020 after they won the 2019 League One play-off final.
Charlton Athletic F.C. was formed on 9 June 1905 by a group of 14- to 15-year-olds in East Street, Charlton, which is now known as Eastmoor Street and no longer residential.
Contrary to some histories, the club was founded as "Charlton Athletic" and had no connection to other teams or institutions such as East St Mission, Blundell Mission or Charlton Reds; it was not founded by a church, school, employer or as a franchise for an existing ground. Charlton spent most of the years before the First World War playing in local leagues, progressing rapidly and winning league titles and promotions eight years in a row. In 1905–06 the team played only friendly games but joined, and won, the Lewisham League Division III for the 1906–07 season. For the 1907–08 season the team contested the Lewisham League and the Woolwich League, and entered the Woolwich Cup. It was also around this time that the Addicks nickname was first used in the local press, although it may have been in use before then. In the 1908–09 season Charlton Athletic were playing in the Blackheath and District League and by 1910–11 had progressed to the Southern Suburban League. During this period Charlton Athletic won the Woolwich Cup four times, the Woolwich League championship three times, the Blackheath League twice and the Southern Suburban League three times.
They became a senior side in 1913, the same year that nearby Woolwich Arsenal relocated to North London.
At the outbreak of the First World War, Charlton were one of the first clubs to close down so that their players could take part in the "Greater Game" overseas. The club was reformed in 1917, playing mainly friendlies to raise funds for charities connected to the war and for the Woolwich Memorial Hospital Cup, for which Charlton donated the trophy. The trophy had previously been the Woolwich Cup, which the team had won outright following three consecutive victories.
After the war, they joined the Kent League for one season (1919–20) before becoming professional, appointing Walter Rayner as the first full-time manager. They were accepted by the Southern League and played just a single season (1920–21) before being voted into the Football League. Charlton's first Football League match was against Exeter City in August 1921, which they won 1–0. In 1923, Charlton became "giant killers" in the FA Cup, beating top-flight sides Manchester City, West Bromwich Albion, and Preston North End before losing to eventual winners Bolton Wanderers in the quarter-finals. Later that year, it was proposed that Charlton merge with Catford Southend to create a larger team with bigger support. In the 1923–24 season Charlton played in Catford at The Mount stadium and wore the colours of "The Enders", light and dark blue vertical stripes. However, the move fell through and the Addicks returned to the Charlton area in 1924, returning to the traditional red and white colours in the process.
Charlton finished second bottom in the Football League in 1926 and were forced to apply for re-election, which was successful. Three years later the Addicks won the Division Three championship in 1929 and they remained at the Division Two level for four years. After relegation into the Third Division South at the end of the 1932–33 season the club appointed Jimmy Seed as manager and he oversaw the most successful period in Charlton's history either side of the Second World War. Seed, an ex-miner who had made a career as a footballer despite suffering the effects of poison gas in the First World War, remains the most successful manager in Charlton's history. He is commemorated in the name of a stand at the Valley. Seed was an innovative thinker about the game at a time when tactical formations were still relatively unsophisticated. He later recalled "a simple scheme that enabled us to pull several matches out of the fire" during the 1934–35 season: when the team was in trouble "the centre-half was to forsake his defensive role and go up into the attack to add weight to the five forwards." The organisation Seed brought to the team proved effective and the Addicks gained successive promotions from the Third Division to the First Division between 1934 and 1936, becoming the first club ever to do so. Charlton finally secured promotion to the First Division by beating local rivals West Ham United at the Boleyn Ground, with their centre-half John Oakes playing on despite concussion and a broken nose.
In 1937, Charlton finished runners-up in the First Division; in 1938 they finished fourth, and in 1939 third. They were the most consistent team in the top flight of English football over the three seasons immediately before the Second World War. This form continued during the war years, when they won the Football League War Cup and appeared in a number of wartime finals.
Charlton reached the 1946 FA Cup Final, but lost 4–1 to Derby County at Wembley. Charlton's Bert Turner scored an own goal in the 80th minute before equalising for the Addicks a minute later to take them into extra time, but they conceded three further goals in the extra period. When the full league programme resumed in 1946–47 Charlton could finish only 19th in the First Division, just above the relegation spots, but they made amends with their performance in the FA Cup, reaching the 1947 FA Cup Final. This time they were successful, beating Burnley 1–0, with Chris Duffy scoring the only goal of the day. In this period of renewed interest in football, Charlton became one of only 13 English teams to average attendances of over 40,000 across a full season. The Valley was the largest football ground in the League, drawing crowds in excess of 70,000. However, in the 1950s little investment was made either in players or in The Valley, hampering the club's growth. In 1956, the then board undermined Jimmy Seed and asked for his resignation; Charlton were relegated the following year.
From the late 1950s until the early 1970s, Charlton remained a mainstay of the Second Division before relegation to the Third Division in 1972. It caused the team's support to drop, and even a promotion in 1975 back to the second division did little to re-invigorate the team's support and finances. In 1979–80 Charlton were relegated again to the Third Division, but won immediate promotion back to the Second Division in 1980–81. This was a turning point in the club's history leading to a period of turbulence and change including further promotion and exile. A change in management and shortly after a change in club ownership led to severe problems, such as the reckless signing of former European Footballer of the Year Allan Simonsen, and the club looked like it would go out of business.
In 1984 financial matters came to a head and the club went into administration, to be reformed as Charlton Athletic (1984) Ltd. although the club's finances were still far from secure. They were forced to leave the Valley just after the start of the 1985–86 season, after its safety was criticised by Football League officials in the wake of the Bradford City stadium fire. The club began to ground-share with Crystal Palace at Selhurst Park and this arrangement looked to be for the long-term, as Charlton did not have enough funds to revamp the Valley to meet safety requirements.
Despite the move away from the Valley, Charlton were promoted to the First Division as Second Division runners-up at the end of 1985–86, and remained at this level for four years (achieving a highest league finish of 14th) often with late escapes, most notably against Leeds in 1987, where the Addicks triumphed in extra-time of the play-off final replay to secure their top flight place. In 1987 Charlton also returned to Wembley for the first time since the 1947 FA Cup final for the Full Members Cup final against Blackburn. Eventually, Charlton were relegated in 1990 along with Sheffield Wednesday and bottom club Millwall. Manager Lennie Lawrence remained in charge for one more season before he accepted an offer to take charge of Middlesbrough. He was replaced by joint player-managers Alan Curbishley and Steve Gritt. The pair had unexpected success in their first season finishing just outside the play-offs, and 1992–93 began promisingly and Charlton looked good bets for promotion in the new Division One (the new name of the old Second Division following the formation of the Premier League). However, the club was forced to sell players such as Rob Lee to help pay for a return to the Valley, while club fans formed the Valley Party, nominating candidates to stand in local elections in 1990, pressing the local council to enable the club's return to the Valley – finally achieved in December 1992.
In March 1993, defender Tommy Caton, who had been out of action because of injury since January 1991, announced his retirement from playing on medical advice. He died suddenly at the end of the following month at the age of 30.
In 1995, new chairman Richard Murray appointed Alan Curbishley as sole manager of Charlton. Under his sole leadership Charlton made an appearance in the play-offs in 1996 but were eliminated by Crystal Palace in the semi-finals, and the following season brought a disappointing 15th-place finish. 1997–98 was Charlton's best season for years. They reached the Division One play-off final and battled against Sunderland in a thrilling game which ended with a 4–4 draw after extra time. Charlton won 7–6 on penalties, with the match described as "arguably the most dramatic game of football in Wembley's history", and were promoted to the Premier League.
Charlton's first Premier League campaign began promisingly (they went top after two games) but they were unable to keep up their good form and were soon battling relegation. The battle was lost on the final day of the season but the club's board kept faith in Curbishley, confident that they could bounce back. Curbishley rewarded the chairman's loyalty with the Division One title in 2000 which signalled a return to the Premier League.
After the club's return, Curbishley proved an astute spender and by 2003 he had succeeded in establishing Charlton in the top flight. Charlton spent much of the 2003–04 Premier League season challenging for a Champions League place, but a late-season slump in form and the sale of star player Scott Parker to Chelsea left Charlton in seventh place, which was still the club's highest finish since the 1950s. Charlton were unable to build on this level of achievement and Curbishley departed in 2006, with the club still established as a solid mid-table side.
In May 2006, Iain Dowie was named as Curbishley's successor, but was sacked after 12 league matches in November 2006, with only two wins. Les Reed replaced Dowie as manager; however, he too failed to improve Charlton's position in the league table, and on Christmas Eve 2006 Reed was replaced by former player Alan Pardew. Although results did improve, Pardew was unable to keep Charlton up and relegation was confirmed in the penultimate match of the season.
Charlton's return to the second tier of English football was a disappointment, with their promotion campaign tailing off to an 11th-place finish. Early in the following season the Addicks were linked with a foreign takeover, but this was swiftly denied by the club. On 10 October 2008, Charlton received an indicative offer for the club from a Dubai-based diversified investment company. However, the deal later fell through. The full significance of this soon became apparent as the club recorded net losses of over £13 million for that financial year. Pardew left on 22 November after a 2–5 home loss to Sheffield United that saw the team fall into the relegation places. Matters did not improve under caretaker manager Phil Parkinson, and the team went a club-record 18 games without a win before finally achieving a 1–0 away victory over Norwich City in an FA Cup third round replay; Parkinson was hired on a permanent basis. The team were relegated to League One after a 2–2 draw against Blackpool on 18 April 2009.
After spending almost the entire 2009–10 season in the top six of League One, Charlton were defeated in the Football League One play-offs semi-final second leg on penalties against Swindon Town.
After a change in ownership, Parkinson and Charlton legend Mark Kinsella left after a poor run of results. Another Charlton legend, Chris Powell, was appointed manager of the club in January 2011, winning his first game in charge 2–0 over Plymouth at the Valley. This was Charlton's first league win since November. Powell's bright start continued with a further three victories, before running into a downturn which saw the club go 11 games in succession without a win. Yet the fans' respect for Powell saw him come under remarkably little criticism. The club's fortunes picked up towards the end of the season, but they still finished far short of the play-offs. In a busy summer, Powell brought in 19 new players, and after a successful season, on 14 April 2012, Charlton Athletic won promotion back to the Championship with a 1–0 away win at Carlisle United. A week later, on 21 April 2012, they were confirmed as champions after a 2–1 home win over Wycombe Wanderers. Charlton then lifted the League One trophy on 5 May 2012, having been in the top position since 15 September 2011, and after recording a 3–2 victory over Hartlepool United, recorded their highest-ever league points total of 101, the highest in any professional European league that year.
In their first season back in the Championship, 2012–13, Charlton finished in ninth place with 65 points, just three points short of the play-off places for promotion to the Premier League.
In early January 2014, during the 2013–14 season, Belgian businessman Roland Duchâtelet took over Charlton as owner in a deal worth £14 million. This made Charlton a part of a network of football clubs owned by Duchâtelet. On 11 March 2014, two days after an FA Cup quarter-final loss to Sheffield United, and with Charlton sitting bottom of the table, Powell was sacked, with private emails suggesting a rift with the owner.
New manager Jose Riga, despite having to join Charlton long after the transfer window had closed, was able to improve Charlton's form and eventually guide them to 18th place, successfully avoiding relegation. After Riga's departure to manage Blackpool, former Millwall player Bob Peeters was appointed as manager in May 2014 on a 12-month contract. Charlton started strong, but a long run of draws meant that after only 25 games in charge Peeters was dismissed with the team in 14th place. His replacement, Guy Luzon, ensured there was no relegation battle by winning most of the remaining matches, resulting in a 12th-place finish.
The 2015–16 season began promisingly but results under Luzon deteriorated and on 24 October 2015 after a 3–0 defeat at home to Brentford he was sacked. Luzon said in a News Shopper interview that he "was not the one who chose how to do the recruitment" as the reason why he failed as manager. Karel Fraeye was appointed "interim head coach", but was sacked after 14 games and just two wins, with the club then second from bottom in the Championship. On 14 January 2016, Jose Riga was appointed head coach for a second spell, but could not prevent Charlton from being relegated to League One for the 2016–17 season. Riga resigned at the end of the season. To many fans, the managerial changes and subsequent relegation to League One were symptomatic of the mismanagement of the club under Duchâtelet's ownership and several protests began.
After a slow start to the new season, with the club in 15th place of League One, the club announced that it had "parted company" with Russell Slade in November 2016. Karl Robinson was appointed on a permanent basis soon after. He led the Addicks to an uneventful 13th-place finish. The following season Robinson had the team challenging for the play-offs, but a drop in form in March led him to resign by mutual consent. He was replaced by former player Lee Bowyer as caretaker manager who guided them to a 6th-place finish, but lost in the play-off semi-final.
Bowyer was appointed permanently in September on a one-year contract and, after finishing third in the regular 2018–19 EFL League One season, Charlton beat Sunderland 2–1 in the League One play-off final to earn promotion back to the EFL Championship after a three-season absence. Bowyer signed a new one-year contract following promotion, which was extended to three years in January 2020.
On 29 November 2019, Charlton Athletic were acquired by East Street Investments (ESI) from Abu Dhabi, subject to EFL approval. Approval was reportedly granted on 2 January 2020. However, on 10 March 2020, a public disagreement between the new owners erupted along with reports that the main investor was pulling out, and the EFL said the takeover had not been approved. The Valley and Charlton's training ground were still owned by Duchâtelet, and a transfer embargo was in place as the new owners had not provided evidence of funding through to June 2021. On 20 April 2020, the EFL said the club was being investigated for misconduct regarding the takeover. In June 2020, Charlton confirmed that ESI had been taken over by a consortium led by businessman Paul Elliott, and said it had contacted the EFL to finalise the ownership change. However, a legal dispute involving former ESI director Matt Southall continued. He attempted to regain control of the club to prevent Elliott's takeover from going ahead, but failed and was subsequently fined and dismissed for challenging the club's directors. On 7 August 2020, the EFL said three individuals, including ESI owner Elliott and lawyer Chris Farnell, had failed its Owners' and Directors' Test, leaving the club's ownership unclear; Charlton appealed against the decision. Meanwhile, Charlton were relegated to League One at the end of the 2019–20 season after finishing 22nd. Because of the COVID-19 pandemic, the final games of the season were played behind closed doors, which remained the case for the majority of the following season.
Later in August, Thomas Sandgaard, a Danish businessman based in Colorado, was reported to be negotiating to buy the club. After further court hearings, Elliott was granted an injunction blocking the sale of ESI until a hearing in November 2020.
On 25 September 2020, Thomas Sandgaard acquired the club itself from ESI, and was reported to have passed the EFL's Owners' and Directors' Tests; the EFL noted the change in control, but said the club's sale was now "a matter for the interested parties".
On 15 March 2021, with the club lying in eighth place, Bowyer resigned as club manager and was appointed manager of Birmingham City. His successor, Nigel Adkins, was appointed three days later. The club finished the 2020–21 season in seventh place, but started the following season by winning only two out of 13 League One matches and were in the relegation zone when Adkins was sacked on 21 October 2021.
After a successful spell as caretaker manager, Johnnie Jackson was appointed manager in December 2021, but he was also sacked after finishing the season in 13th place. Swindon Town manager Ben Garner was appointed as his replacement in June 2022, but was sacked on 5 December 2022 with the team in 17th place. After the club was knocked out of the FA Cup by League Two side Stockport County on 7 December, supporters said Charlton was at its "lowest ebb in living memory", with fans "losing confidence" in owner Thomas Sandgaard. Dean Holden was appointed manager on 20 December 2022, and Charlton improved to finish the 2022–23 season in 10th place.
On 5 June 2023, the club announced that SE7 Partners, comprising former Sunderland director Charlie Methven and Edward Warrick, had agreed a takeover of Charlton Athletic, becoming the club's fourth set of owners in under four years. On 19 July, the EFL and FA cleared SE7 Partners to take over the club, and the deal was completed on 21 July 2023. On 27 August 2023, after one win in the opening six games of the 2023–24 season, Holden was sacked as manager, and succeeded by Michael Appleton.
Charlton have used a number of crests and badges during their history, although the current design has not been changed since 1968. The first known badge, from the 1930s, consisted of the letters CAF in the shape of a club from a pack of cards. In the 1940s, Charlton used a design featuring a robin sitting in a football within a shield, sometimes with the letters CAFC in the four quarters of the shield, which was worn for the 1946 FA Cup Final. In the late 1940s and early 1950s, the crest of the former metropolitan borough of Greenwich was used as a symbol for the club, but it did not appear on the team's shirts.
In 1963, a competition was held to find a new badge for the club, and the winning entry was a hand holding a sword, which reflected Charlton's nickname of the time, the Valiants. Over the next five years modifications were made to this design, such as the addition of a circle surrounding the hand and sword and the inclusion of the club's name in the badge. By 1968, the design had reached the one known today, and it has been used continuously since then, apart from a period in the 1970s when just the letters CAFC appeared on the team's shirts.
With the exception of one season, Charlton have always played in red and white – colours chosen by the boys who founded Charlton Athletic in 1905 after having to play their first matches in the borrowed kits of their local rivals Woolwich Arsenal, who also played in red and white. The exception came during part of the 1923–24 season when Charlton wore the colours of Catford Southend as part of the proposed move to Catford, which were light and dark blue stripes. However, after the move fell through, Charlton returned to wearing red and white as their home colours.
Charlton's most common nickname is The Addicks. The name originates from a local fishmonger, Arthur "Ikey" Bryan, who rewarded the team with meals of haddock and chips with vinegar.
The progression of the nickname can be seen in the book The Addicks Cartoons: An Affectionate Look into the Early History of Charlton Athletic, which covers the pre-First World War history of Charlton through a narrative based on 56 cartoons which appeared in the now defunct Kentish Independent. The very first cartoon, from 31 October 1908, calls the team the Haddocks. By 1910, the name had changed to Addicks although it also appeared as Haddick. The club also have two other nicknames, The Robins, adopted in 1931, and The Valiants, chosen in a fan competition in the 1960s which also led to the adoption of the sword badge which is still in use. The Addicks nickname never went away and was revived by fans after the club lost its Valley home in 1985 and went into exile at Crystal Palace. It is now once again the official nickname of the club.
Charlton fans' chants have included "Valley, Floyd Road", a song noting the stadium's address to the tune of "Mull of Kintyre".
The club's first ground was Siemens Meadow (1905–1907), a patch of rough ground by the River Thames, overshadowed by the Siemens Brothers Telegraph Works. Then followed Woolwich Common (1907–1908), Pound Park (1908–1913), and Angerstein Lane (1913–1915). After the end of the First World War, a chalk quarry known as the Swamps was identified as Charlton's new ground, and in the summer of 1919 work began to create the level playing area and remove debris from the site. The first match at this site, now known as the club's current ground The Valley, was in September 1919. Charlton stayed at The Valley until 1923, when the club moved to The Mount stadium in Catford as part of a proposed merger with Catford Southend Football Club. However, after this move collapsed in 1924 Charlton returned to The Valley.
During the 1930s and 1940s, significant improvements were made to the ground, making it one of the largest in the country at that time. In 1938 the highest attendance to date at the ground, over 75,000, was recorded for an FA Cup match against Aston Villa. During the 1940s and 1950s the attendance was often above 40,000, and Charlton had one of the largest support bases in the country. However, after the club's relegation little investment was made in The Valley as it fell into decline.
In the 1980s matters came to a head as the ownership of the club and The Valley was divided. The large East Terrace had been closed down by the authorities after the Bradford City stadium fire and the ground's owner wanted to use part of the site for housing. In September 1985, Charlton made the controversial move to ground-share with South London neighbours Crystal Palace at Selhurst Park. This move was unpopular with supporters and in the late 1980s significant steps were taken to bring about the club's return to The Valley.
A single issue political party, the Valley Party, contested the 1990 local Greenwich Borough Council elections on a ticket of reopening the stadium, capturing 11% of the vote, aiding the club's return. The Valley Gold investment scheme was created to help supporters fund the return to The Valley, and several players were also sold to raise funds. For the 1991–92 season and part of the 1992–93 season, the Addicks played at West Ham's Upton Park as Wimbledon had moved into Selhurst Park alongside Crystal Palace. Charlton finally returned to The Valley in December 1992, celebrating with a 1–0 victory against Portsmouth.
Since the return to The Valley, three sides of the ground have been completely redeveloped, turning The Valley into a modern all-seater stadium with a capacity of 27,111, the largest in South London. There are plans in place to increase the ground's capacity to approximately 31,000, and eventually to around 40,000.
The bulk of the club's support base comes from South East London and Kent, particularly the London boroughs of Greenwich, Bexley and Bromley. Supporters played a key role in the return of the club to The Valley in 1992 and were rewarded with a voice on the board in the form of an elected supporter director. Any season ticket holder could put themselves forward for election, with a certain number of nominations, and votes were cast by all season ticket holders over the age of 18. The last such director, Ben Hayes, was elected in 2006 to serve until 2008, when the role was discontinued as a result of legal issues. Its functions were replaced by a fans forum, which met for the first time in December 2008 and remains active.
Charlton's main rivals are their South London neighbours, Crystal Palace and Millwall. Unlike those rivals, Charlton have never competed in football's fourth tier and are the only one of the three to have won the FA Cup.
In 1985, Charlton were forced to ground-share with Crystal Palace after safety concerns at The Valley. They played their home fixtures at the Glaziers' Selhurst Park stadium until 1991. The arrangement was seen by Crystal Palace chairman Ron Noades as essential for the future of football, but it was unpopular with both sets of fans. Charlton fans campaigned for a return to The Valley throughout their time at Selhurst Park. In 2005, Palace were relegated by Charlton at the Valley after a 2–2 draw. Palace needed a win to survive. However, with seven minutes left, Charlton equalised, relegating their rivals. Post-match, there was a well-publicised altercation between the two chairmen of the respective clubs, Richard Murray and Simon Jordan. Since their first meeting in the Football League in 1925, Charlton have won 17, drawn 13 and lost 26 games against Palace. The teams last met in 2015, a 4–1 win for Palace in the League Cup.
Charlton are closer to Millwall than to any other EFL club, with The Valley and The Den being less than four miles (6.4 km) apart. They last met in July 2020, a 1–0 win for Millwall at the Valley. Since their first Football League game in 1921, Charlton have won 11, drawn 26 and lost 37 league games (the two sides also met twice in the Anglo-Italian Cup in the 1992–93 season; Charlton won one tie and the other was drawn). The Addicks have not beaten Millwall in the last 12 league fixtures between the sides; their last win came on 9 March 1996 at The Valley.
Charlton Athletic featured in the ITV one-off drama Albert's Memorial, shown on 12 September 2010 and starring David Jason and David Warner.
In the long-running BBC sitcom Only Fools and Horses, Rodney Charlton Trotter is named after the club.
In the BBC sitcom Brush Strokes, the lead character Jacko was a Charlton fan, reflecting the real life allegiance to the club of the actor who portrayed him, Karl Howman.
In the BBC science-fiction series Doctor Who, the seventh Doctor's companion Ace (played by Sophie Aldred from 1987 to 1989) is a fan of Charlton Athletic.
Charlton's ground and the then manager, Alan Curbishley, made appearances in the Sky One TV series Dream Team.
Charlton Athletic assumes a pivotal role in the film The Silent Playground (1963). Three children get into trouble when their mother's boyfriend 'Uncle' Alan (John Ronane) gives them pocket money to wander off on their own, so that he can attend a Charlton football match. There is some footage from the ground, which Ronane is later seen leaving.
In the Amazon Prime series The Boys, a flashback scene shows Charlton Athletic flags and scarves on display in Billy Butcher's childhood bedroom. A Charlton flag can also be seen in a later flashback scene which takes place in a pub.
Charlton Athletic has also featured in a number of books, both fiction and factual sports writing. These include works by Charlie Connelly and Paul Breen's work of popular fiction The Charlton Men. The book is set against Charlton's successful 2011–12 season, when they won the League One title and promotion back to the Championship, coinciding with the 2011 London riots.
Timothy Young, the protagonist in Out of the Shelter, a novel by David Lodge, supports Charlton Athletic. The book describes Timothy listening to Charlton's victory in the 1947 FA Cup Final on the radio.
| [
{
"paragraph_id": 0,
"text": "Charlton Athletic Football Club is an English professional football club based in Charlton, south-east London, which compete in EFL League One, the third tier of the English football league system.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Their home ground is The Valley, where the club have played since 1919. They have also played at The Mount in Catford during the 1923–24 season, and spent seven years at Selhurst Park and the Boleyn Ground between 1985 and 1992, because of financial issues, and then safety concerns raised by the local council. The club's traditional kit consists of red shirts, white shorts and red socks, and their most commonly used nickname is The Addicks. Charlton share local rivalries with fellow South London clubs Crystal Palace and Millwall.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The club was founded on 9 June 1905 and turned professional in 1920. They spent one season in the Kent League and one season in the Southern League, before being invited to join the newly-formed Football League Third Division South in 1921. They won the division in the 1928–29 season, and again in 1934–35 following relegation in 1933. Charlton were promoted out of the Second Division in 1935–36, and finished second in the First Division the next season. Having been beaten finalists in 1946, they lifted the FA Cup the following year with a 1–0 victory over Burnley. The departure of Jimmy Seed in 1956, manager for 23 years, saw the club relegated out of the top-flight the following year. Relegated again in 1972, Charlton were promoted from the Third Division in 1974–75, and again in 1980–81 following relegation the previous season.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Charlton recovered from administration to secure promotion back to the First Division in 1985–86, and went on to lose in the 1987 final of the Full Members' Cup, though they won the 1987 play-off final to retain their top-flight status. Having been relegated in 1990, Charlton won the 1998 play-off final to make their debut in the Premier League. Though they were relegated the next year, manager Alan Curbishley took them back up as champions in 1999–2000. Charlton spent seven successive years in the Premier League, before suffering two relegations in three years. They won League One with 101 points in 2011–12, though were relegated from the Championship in 2016, and again in 2020 after they won the 2019 League One play-off final.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Charlton Athletic F.C. was formed on 9 June 1905 by a group of 14 to 15-year-olds in East Street, Charlton, which is now known as Eastmoor Street and no longer residential.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Contrary to some histories, the club was founded as \"Charlton Athletic\" and had no connection to other teams or institutions such as East St Mission, Blundell Mission or Charlton Reds; it was not founded by a church, school, employer or as a franchise for an existing ground. Charlton spent most of the years before the First World War playing in local leagues but progressing rapidly, winning successive leagues and so promotions eight years in a row. In 1905–06 the team played only friendly games but joined, and won, the Lewisham League Division III for the 1906–07 season. For the 1907–08 season the team contested the Lewisham League, Woolwich League and entered the Woolwich Cup. It was also around this time the Addicks nickname was first used in the local press although it may have been in use before then. In the 1908–09 season Charlton Athletic were playing in the Blackheath and District League and by 1910–11 had progressed to the Southern Suburban League. During this period Charlton Athletic won the Woolwich Cup four times, the championship of the Woolwich League three times, won the Blackheath League twice and the Southern Suburban League three times.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "They became a senior side in 1913 the same year that nearby Woolwich Arsenal relocated to North London.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "At the outbreak of World War One, Charlton were one of the first clubs to close down to take part in the \"Greater Game\" overseas. The club was reformed in 1917, playing mainly friendlies to raise funds for charities connected to the war and for the Woolwich Memorial Hospital Cup, the trophy for which Charlton donated. It had previously been the Woolwich Cup that the team had won outright following three consecutive victories.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "After the war, they joined the Kent League for one season (1919–20) before becoming professional, appointing Walter Rayner as the first full-time manager. They were accepted by the Southern League and played just a single season (1920–21) before being voted into the Football League. Charlton's first Football League match was against Exeter City in August 1921, which they won 1–0. In 1923, Charlton became \"giant killers\" in the FA Cup beating top flight sides Manchester City, West Bromwich Albion, and Preston North End before losing to eventual winners Bolton Wanderers in the Quarter-Finals. Later that year, it was proposed that Charlton merge with Catford Southend to create a larger team with bigger support. In the 1923–24 season Charlton played in Catford at The Mount stadium and wore the colours of \"The Enders\", light and dark blue vertical stripes. However, the move fell through and the Addicks returned to the Charlton area in 1924, returning to the traditional red and white colours in the process.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Charlton finished second bottom in the Football League in 1926 and were forced to apply for re-election which was successful. Three years later the Addicks won the Division Three championship in 1929 and they remained at the Division Two level for four years. After relegation into the Third Division south at the end of the 1932–33 season the club appointed Jimmy Seed as manager and he oversaw the most successful period in Charlton's history either side of the Second World War. Seed, an ex-miner who had made a career as a footballer despite suffering the effects of poison gas in the First World War, remains the most successful manager in Charlton's history. He is commemorated in the name of a stand at the Valley. Seed was an innovative thinker about the game at a time when tactical formations were still relatively unsophisticated. He later recalled \"a simple scheme that enabled us to pull several matches out of the fire\" during the 1934–35 season: when the team was in trouble \"the centre-half was to forsake his defensive role and go up into the attack to add weight to the five forwards.\" The organisation Seed brought to the team proved effective and the Addicks gained successive promotions from the Third Division to the First Division between 1934 and 1936, becoming the first club to ever do so. Charlton finally secured promotion to the First Division by beating local rivals West Ham United at the Boleyn Ground, with their centre-half John Oakes playing on despite concussion and a broken nose.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In 1937, Charlton finished runners up in the First Division, in 1938 finished fourth and 1939 finished third. They were the most consistent team in the top flight of English football over the three seasons immediately before the Second World War. This continued during the war years and they won the Football League War Cup and appeared in finals.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Charlton reached the 1946 FA Cup Final, but lost 4–1 to Derby County at Wembley. Charlton's Bert Turner scored an own goal in the 80th minute before equalising for the Addicks a minute later to take them into extra time, but they conceded three further goals in the extra period. When the full league programme resumed in 1946–47 Charlton could finish only 19th in the First Division, just above the relegation spots, but they made amends with their performance in the FA Cup, reaching the 1947 FA Cup Final. This time they were successful, beating Burnley 1–0, with Chris Duffy scoring the only goal of the day. In this period of renewed football attendances, Charlton became one of only 13 English football teams to average over 40,000 as their attendance during a full season. The Valley was the largest football ground in the League, drawing crowds in excess of 70,000. However, in the 1950s little investment was made either for players or to The Valley, hampering the club's growth. In 1956, the then board undermined Jimmy Seed and asked for his resignation; Charlton were relegated the following year.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "From the late 1950s until the early 1970s, Charlton remained a mainstay of the Second Division before relegation to the Third Division in 1972. It caused the team's support to drop, and even a promotion in 1975 back to the second division did little to re-invigorate the team's support and finances. In 1979–80 Charlton were relegated again to the Third Division, but won immediate promotion back to the Second Division in 1980–81. This was a turning point in the club's history leading to a period of turbulence and change including further promotion and exile. A change in management and shortly after a change in club ownership led to severe problems, such as the reckless signing of former European Footballer of the Year Allan Simonsen, and the club looked like it would go out of business.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1984 financial matters came to a head and the club went into administration, to be reformed as Charlton Athletic (1984) Ltd. although the club's finances were still far from secure. They were forced to leave the Valley just after the start of the 1985–86 season, after its safety was criticised by Football League officials in the wake of the Bradford City stadium fire. The club began to ground-share with Crystal Palace at Selhurst Park and this arrangement looked to be for the long-term, as Charlton did not have enough funds to revamp the Valley to meet safety requirements.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Despite the move away from the Valley, Charlton were promoted to the First Division as Second Division runners-up at the end of 1985–86, and remained at this level for four years (achieving a highest league finish of 14th) often with late escapes, most notably against Leeds in 1987, where the Addicks triumphed in extra-time of the play-off final replay to secure their top flight place. In 1987 Charlton also returned to Wembley for the first time since the 1947 FA Cup final for the Full Members Cup final against Blackburn. Eventually, Charlton were relegated in 1990 along with Sheffield Wednesday and bottom club Millwall. Manager Lennie Lawrence remained in charge for one more season before he accepted an offer to take charge of Middlesbrough. He was replaced by joint player-managers Alan Curbishley and Steve Gritt. The pair had unexpected success in their first season finishing just outside the play-offs, and 1992–93 began promisingly and Charlton looked good bets for promotion in the new Division One (the new name of the old Second Division following the formation of the Premier League). However, the club was forced to sell players such as Rob Lee to help pay for a return to the Valley, while club fans formed the Valley Party, nominating candidates to stand in local elections in 1990, pressing the local council to enable the club's return to the Valley – finally achieved in December 1992.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In March 1993, defender Tommy Caton, who had been out of action because of injury since January 1991, announced his retirement from playing on medical advice. He died suddenly at the end of the following month at the age of 30.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 1995, new chairman Richard Murray appointed Alan Curbishley as sole manager of Charlton. Under his sole leadership Charlton made an appearance in the play-off in 1996 but were eliminated by Crystal Palace in the semi-finals and the following season brought a disappointing 15th-place finish. 1997–98 was Charlton's best season for years. They reached the Division One play-off final and battled against Sunderland in a thrilling game which ended with a 4–4 draw after extra time. Charlton won 7–6 on penalties, with the match described as \"arguably the most dramatic game of football in Wembley's history\", and were promoted to the Premier League.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Charlton's first Premier League campaign began promisingly (they went top after two games) but they were unable to keep up their good form and were soon battling relegation. The battle was lost on the final day of the season but the club's board kept faith in Curbishley, confident that they could bounce back. Curbishley rewarded the chairman's loyalty with the Division One title in 2000 which signalled a return to the Premier League.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "After the club's return, Curbishley proved an astute spender and by 2003 he had succeeded in establishing Charlton in the top flight. Charlton spent much of the 2003–04 Premier League season challenging for a Champions League place, but a late-season slump in form and the sale of star player Scott Parker to Chelsea, left Charlton in seventh place, which was still the club's highest finish since the 1950s. Charlton were unable to build on this level of achievement and Curbishley departed in 2006, with the club still established as a solid mid-table side.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In May 2006, Iain Dowie was named as Curbishley's successor, but was sacked after 12 league matches in November 2006, with only two wins. Les Reed replaced Dowie as manager, however he too failed to improve Charlton's position in the league table and on Christmas Eve 2006, Reed was replaced by former player Alan Pardew. Although results did improve, Pardew was unable to keep Charlton up and relegation was confirmed in the penultimate match of the season.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "Charlton's return to the second tier of English football was a disappointment, with their promotion campaign tailing off to an 11th-place finish. Early in the following season the Addicks were linked with a foreign takeover, but this was swiftly denied by the club. On 10 October 2008, Charlton received an indicative offer for the club from a Dubai-based diversified investment company. However, the deal later fell through. The full significance of this soon became apparent as the club recorded net losses of over £13 million for that financial year. Pardew left on 22 November after a 2–5 home loss to Sheffield United that saw the team fall into the relegation places. Matters did not improve under caretaker manager Phil Parkinson, and the team went a club record 18 games without a win, a new club record, before finally achieving a 1–0 away victory over Norwich City in an FA Cup third round replay; Parkinson was hired on a permanent basis. The team were relegated to League One after a 2–2 draw against Blackpool on 18 April 2009.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "After spending almost the entire 2009–10 season in the top six of League One, Charlton were defeated in the Football League One play-offs semi-final second leg on penalties against Swindon Town.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "After a change in ownership, Parkinson and Charlton legend Mark Kinsella left after a poor run of results. Another Charlton legend, Chris Powell, was appointed manager of the club in January 2011, winning his first game in charge 2–0 over Plymouth at the Valley. This was Charlton's first league win since November. Powell's bright start continued with a further three victories, before running into a downturn which saw the club go 11 games in succession without a win. Yet the fans' respect for Powell saw him come under remarkably little criticism. The club's fortunes picked up towards the end of the season, but leaving them far short of the play-offs. In a busy summer, Powell brought in 19 new players and after a successful season, on 14 April 2012, Charlton Athletic won promotion back to the Championship with a 1–0 away win at Carlisle United. A week later, on 21 April 2012, they were confirmed as champions after a 2–1 home win over Wycombe Wanderers. Charlton then lifted the League One trophy on 5 May 2012, having been in the top position since 15 September 2011, and after recording a 3–2 victory over Hartlepool United, recorded their highest ever league points score of 101, the highest in any professional European league that year.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the first season back in the Championship, the 2012–13 season saw Charlton finish ninth place with 65 points, just three points short of the play-off places to the Premier League.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In early January 2014 during the 2013–14 season, Belgian businessman Roland Duchâtelet took over Charlton as owner in a deal worth £14million. This made Charlton a part of a network of football clubs owned by Duchâtelet. On 11 March 2014, two days after an FA Cup quarter-final loss to Sheffield United, and with Charlton sitting bottom of the table, Powell was sacked, private emails suggesting a rift with the owner.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "New manager Jose Riga, despite having to join Charlton long after the transfer window had closed, was able to improve Charlton's form and eventually guide them to 18th place, successfully avoiding relegation. After Riga's departure to manage Blackpool, former Millwall player Bob Peeters was appointed as manager in May 2014 on a 12-month contract. Charlton started strong, but a long run of draws meant that after only 25 games in charge Peeters was dismissed with the team in 14th place. His replacement, Guy Luzon, ensured there was no relegation battle by winning most of the remaining matches, resulting in a 12th-place finish.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The 2015–16 season began promisingly but results under Luzon deteriorated and on 24 October 2015 after a 3–0 defeat at home to Brentford he was sacked. Luzon said in a News Shopper interview that he \"was not the one who chose how to do the recruitment\" as the reason why he failed as manager. Karel Fraeye was appointed \"interim head coach\", but was sacked after 14 games and just two wins, with the club then second from bottom in the Championship. On 14 January 2016, Jose Riga was appointed head coach for a second spell, but could not prevent Charlton from being relegated to League One for the 2016–17 season. Riga resigned at the end of the season. To many fans, the managerial changes and subsequent relegation to League One were symptomatic of the mismanagement of the club under Duchâtelet's ownership and several protests began.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "After a slow start to the new season, with the club in 15th place of League One, the club announced that it had \"parted company\" with Russell Slade in November 2016. Karl Robinson was appointed on a permanent basis soon after. He led the Addicks to an uneventful 13th-place finish. The following season Robinson had the team challenging for the play-offs, but a drop in form in March led him to resign by mutual consent. He was replaced by former player Lee Bowyer as caretaker manager who guided them to a 6th-place finish, but lost in the play-off semi-final.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Bowyer was appointed permanently in September on a one-year contract and after finishing third in the regular 2018-19 EFL League One season, Charlton beat Sunderland 2–1 in the League One play-off final to earn promotion back to the EFL Championship after a three-season absence. Bowyer later signed a new one-year contract following promotion, which was later extended to three years in January 2020.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "On 29 November 2019, Charlton Athletic were acquired by East Street Investments (ESI) from Abu Dhabi, subject to EFL approval. Approval was reportedly granted on 2 January 2020. However, on 10 March 2020, a public disagreement between the new owners erupted along with reports that the main investor was pulling out, and the EFL said the takeover had not been approved. The Valley and Charlton's training ground were still owned by Duchâtelet, and a transfer embargo was in place as the new owners had not provided evidence of funding through to June 2021. On 20 April 2020, the EFL said the club was being investigated for misconduct regarding the takeover. In June 2020, Charlton confirmed that ESI had been taken over by a consortium led by businessman Paul Elliott, and said it had contacted the EFL to finalise the ownership change. However, a legal dispute involving former ESI director Matt Southall continued. He attempted to regain control of the club to prevent Elliot's takeover from going ahead, but failed and was subsequently fined and dismissed for challenging the club's directors. On 7 August 2020, the EFL said three individuals, including ESI owner Elliot and lawyer Chris Farnell, had failed its Owners' and Directors' Test, leaving the club's ownership unclear; Charlton appealed against the decision. Meanwhile, Charlton were relegated to League One at the end of the 2019–20 season after finishing 22nd. Because of the COVID-19 pandemic, the final games of the season were played behind closed doors, which remained the case for the majority of the following season.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Later in August, Thomas Sandgaard, a Danish businessman based in Colorado, was reported to be negotiating to buy the club. After further court hearings, Elliott was granted an injunction blocking the sale of ESI until a hearing in November 2020.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "On 25 September 2020, Thomas Sandgaard acquired the club itself from ESI, and was reported to have passed the EFL's Owners' and Directors' Tests; the EFL noted the change in control, but said the club's sale was now \"a matter for the interested parties\".",
"title": "History"
},
{
"paragraph_id": 32,
"text": "On 15 March 2021, with the club lying in eighth place, Bowyer resigned as club manager and was appointed manager of Birmingham City. His successor, Nigel Adkins, was appointed three days later. The club finished the 2020–21 season in seventh place, but started the following season by winning only two out of 13 League One matches and were in the relegation zone when Adkins was sacked on 21 October 2021.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "After a successful spell as caretaker manager, Johnnie Jackson was appointed manager in December 2021, but he was also sacked after finishing the season in 13th. Swindon Town manager Ben Garner was appointed as his replacement in June 2022, but was sacked on 5 December 2022 with the team in 17th place. After the club was knocked out of the FA Cup by League Two side Stockport County on 7 December, supporters said Charlton was at its \"lowest ebb in living memory\", with fans \"losing confidence\" in owner Thomas Sandgaard. Dean Holden was appointed manager on 20 December 2022, and Charlton improved to finish the 2022–23 season in 10th place.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "On 5 June 2023, the club announced that SE7 Partners, comprising former Sunderland director Charlie Methven and Edward Warrick, had agreed a takeover of Charlton Athletic, becoming the club's fourth set of owners in under four years. On 19 July, the EFL and FA cleared SE7 Partners to take over the club, and the deal was completed on 21 July 2023. On 27 August 2023, after one win in the opening six games of the 2023–24 season, Holden was sacked as manager, and succeeded by Michael Appleton.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Charlton have used a number of crests and badges during their history, although the current design has not been changed since 1968. The first known badge, from the 1930s, consisted of the letters CAF in the shape of a club from a pack of cards. In the 1940s, Charlton used a design featuring a robin sitting in a football within a shield, sometimes with the letters CAFC in the four-quarters of the shield, which was worn for the 1946 FA Cup Final. In the late 1940s and early 1950s, the crest of the former metropolitan borough of Greenwich was used as a symbol for the club but this was not used on the team's shirts.",
"title": "Club identity"
},
{
"paragraph_id": 36,
"text": "In 1963, a competition was held to find a new badge for the club, and the winning entry was a hand holding a sword, which complied with Charlton's nickname of the time, the Valiants. Over the next five years modifications were made to this design, such as the addition of a circle surrounding the hand and sword and including the club's name in the badge. By 1968, the design had reached the one known today, and has been used continuously from this year, apart from a period in the 1970s when just the letters CAFC appeared on the team's shirts.",
"title": "Club identity"
},
{
"paragraph_id": 37,
"text": "With the exception of one season, Charlton have always played in red and white – colours chosen by the boys who founded Charlton Athletic in 1905 after having to play their first matches in the borrowed kits of their local rivals Woolwich Arsenal, who also played in red and white. The exception came during part of the 1923–24 season when Charlton wore the colours of Catford Southend as part of the proposed move to Catford, which were light and dark blue stripes. However, after the move fell through, Charlton returned to wearing red and white as their home colours.",
"title": "Club identity"
},
{
"paragraph_id": 38,
"text": "The sponsors were as follows:",
"title": "Club identity"
},
{
"paragraph_id": 39,
"text": "Charlton's most common nickname is The Addicks. The origin of this name is from a local fishmonger, Arthur \"Ikey\" Bryan, who rewarded the team with meals of haddock and chips with vinegar",
"title": "Club identity"
},
{
"paragraph_id": 40,
"text": "The progression of the nickname can be seen in the book The Addicks Cartoons: An Affectionate Look into the Early History of Charlton Athletic, which covers the pre-First World War history of Charlton through a narrative based on 56 cartoons which appeared in the now defunct Kentish Independent. The very first cartoon, from 31 October 1908, calls the team the Haddocks. By 1910, the name had changed to Addicks although it also appeared as Haddick. The club also have two other nicknames, The Robins, adopted in 1931, and The Valiants, chosen in a fan competition in the 1960s which also led to the adoption of the sword badge which is still in use. The Addicks nickname never went away and was revived by fans after the club lost its Valley home in 1985 and went into exile at Crystal Palace. It is now once again the official nickname of the club.",
"title": "Club identity"
},
{
"paragraph_id": 41,
"text": "Charlton fans' chants have included \"Valley, Floyd Road\", a song noting the stadium's address to the tune of \"Mull of Kintyre\".",
"title": "Club identity"
},
{
"paragraph_id": 42,
"text": "The club's first ground was Siemens Meadow (1905–1907), a patch of rough ground by the River Thames. This was over-shadowed by the Siemens Brothers Telegraph Works. Then followed Woolwich Common (1907–1908), Pound Park (1908–1913), and Angerstein Lane (1913–1915). After the end of the First World War, a chalk quarry known as the Swamps was identified as Charlton's new ground, and in the summer of 1919 work began to create the level playing area and remove debris from the site. The first match at this site, now known as the club's current ground The Valley, was in September 1919. Charlton stayed at The Valley until 1923, when the club moved to The Mount stadium in Catford as part of a proposed merger with Catford Southend Football Club. However, after this move collapsed in 1924 Charlton returned to The Valley.",
"title": "Stadium"
},
{
"paragraph_id": 43,
"text": "During the 1930s and 1940s, significant improvements were made to the ground, making it one of the largest in the country at that time. In 1938 the highest attendance to date at the ground was recorded at over 75,000 for a FA Cup match against Aston Villa. During the 1940s and 1950s the attendance was often above 40,000, and Charlton had one of the largest support bases in the country. However, after the club's relegation little investment was made in The Valley as it fell into decline.",
"title": "Stadium"
},
{
"paragraph_id": 44,
"text": "In the 1980s matters came to a head as the ownership of the club and The Valley was divided. The large East Terrace had been closed down by the authorities after the Bradford City stadium fire and the ground's owner wanted to use part of the site for housing. In September 1985, Charlton made the controversial move to ground-share with South London neighbours Crystal Palace at Selhurst Park. This move was unpopular with supporters and in the late 1980s significant steps were taken to bring about the club's return to The Valley.",
"title": "Stadium"
},
{
"paragraph_id": 45,
"text": "A single issue political party, the Valley Party, contested the 1990 local Greenwich Borough Council elections on a ticket of reopening the stadium, capturing 11% of the vote, aiding the club's return. The Valley Gold investment scheme was created to help supporters fund the return to The Valley, and several players were also sold to raise funds. For the 1991–92 season and part of the 1992–93 season, the Addicks played at West Ham's Upton Park as Wimbledon had moved into Selhurst Park alongside Crystal Palace. Charlton finally returned to The Valley in December 1992, celebrating with a 1–0 victory against Portsmouth.",
"title": "Stadium"
},
{
"paragraph_id": 46,
"text": "Since the return to The Valley, three sides of the ground have been completely redeveloped turning The Valley into a modern, all-seater stadium with a 27,111 capacity which is the biggest in South London. There are plans in place to increase the ground's capacity to approximately 31,000 and even around 40,000 in the future.",
"title": "Stadium"
},
{
"paragraph_id": 47,
"text": "The bulk of the club's support base comes from South East London and Kent, particularly the London boroughs of Greenwich, Bexley and Bromley. Supporters played a key role in the return of the club to The Valley in 1992 and were rewarded by being granted a voice on the board in the form of an elected supporter director. Any season ticket holder could put themselves forward for election, with a certain number of nominations, and votes were cast by all season ticket holders over the age of 18. The last such director, Ben Hayes, was elected in 2006 to serve until 2008, when the role was discontinued as a result of legal issues. Its functions were replaced by a fans forum, which met for the first time in December 2008 and is still active to this day.",
"title": "Supporters and rivalries"
},
{
"paragraph_id": 48,
"text": "Charlton's main rivals are their South London neighbours, Crystal Palace and Millwall. Unlike those rivals Charlton have never competed in football's fourth tier and are the only one of the three to have won the FA Cup.",
"title": "Supporters and rivalries"
},
{
"paragraph_id": 49,
"text": "In 1985, Charlton were forced to ground-share with Crystal Palace after safety concerns at The Valley. They played their home fixtures at the Glaziers' Selhurst Park stadium until 1991. The arrangement was seen by Crystal Palace chairman Ron Noades as essential for the future of football, but it was unpopular with both sets of fans. Charlton fans campaigned for a return to The Valley throughout their time at Selhurst Park. In 2005, Palace were relegated by Charlton at the Valley after a 2–2 draw. Palace needed a win to survive. However, with seven minutes left, Charlton equalised, relegating their rivals. Post-match, there was a well-publicised altercation between the two chairmen of the respective clubs, Richard Murray and Simon Jordan. Since their first meeting in the Football League in 1925, Charlton have won 17, drawn 13 and lost 26 games against Palace. The teams last met in 2015, a 4–1 win for Palace in the League Cup.",
"title": "Supporters and rivalries"
},
{
"paragraph_id": 50,
"text": "Charlton are closest in proximity to Millwall than any other EFL club, with The Valley and The Den being less than four miles (6.4 km) apart. They last met in July 2020, a 1–0 win for Millwall at the Valley. Since their first Football League game in 1921, Charlton have won 11, drawn 26 and lost 37 league games (the two sides also met twice in the Anglo-Italian Cup in the 1992–93 season; Charlton winning one tie, and one draw). The Addicks have not beaten Millwall in the last 12 league fixtures between the sides; their last win came on 9 March 1996 at The Valley.",
"title": "Supporters and rivalries"
},
{
"paragraph_id": 51,
"text": "Charlton Athletic featured in the ITV one-off drama Albert's Memorial, shown on 12 September 2010 and starring David Jason and David Warner.",
"title": "In popular culture"
},
{
"paragraph_id": 52,
"text": "In the long-running BBC sitcom Only Fools and Horses, Rodney Charlton Trotter is named after the club.",
"title": "In popular culture"
},
{
"paragraph_id": 53,
"text": "In the BBC sitcom Brush Strokes, the lead character Jacko was a Charlton fan, reflecting the real life allegiance to the club of the actor who portrayed him, Karl Howman.",
"title": "In popular culture"
},
{
"paragraph_id": 54,
"text": "In the BBC science-fiction series Doctor Who, the seventh Doctor's companion Ace (played by Sophie Aldred from 1987 to 1989) is a fan of Charlton Athletic.",
"title": "In popular culture"
},
{
"paragraph_id": 55,
"text": "Charlton's ground and the then manager, Alan Curbishley, made appearances in the Sky One TV series Dream Team.",
"title": "In popular culture"
},
{
"paragraph_id": 56,
"text": "Charlton Athletic assumes a pivotal role in the film The Silent Playground (1963). Three children get in to trouble when their mother's boyfriend 'Uncle' Alan (John Ronane), gives them pocket money to wander off on their own, so that he can attend a Charlton football match. There is some footage from the ground which Ronane is later seen leaving.",
"title": "In popular culture"
},
{
"paragraph_id": 57,
"text": "In Amazon Prime series The Boys, a flashback scene shows Charlton Athletic flags and scarfs on display in Billy Butcher's childhood bedroom. A Charlton flag can also be seen in a later flashback scene which takes place in a pub.",
"title": "In popular culture"
},
{
"paragraph_id": 58,
"text": "Charlton Athletic has also featured in a number of book publications, in both the realm of fiction and factual/sports writing. These include works by Charlie Connelly and Paul Breen's work of popular fiction which is entitled The Charlton Men. The book is set against Charlton's successful 2011–12 season when they won the League One title and promotion back to the Championship in concurrence with the 2011 London riots.",
"title": "In popular culture"
},
{
"paragraph_id": 59,
"text": "Timothy Young, the protagonist in Out of the Shelter, a novel by David Lodge, supports Charlton Athletic. The book describes Timothy listening to Charlton's victory in the 1947 FA Cup Final on the radio.",
"title": "In popular culture"
},
{
"paragraph_id": 60,
"text": "Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.",
"title": "Players"
},
{
"paragraph_id": 61,
"text": "Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.",
"title": "Players"
},
{
"paragraph_id": 62,
"text": "Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.",
"title": "Players"
},
{
"paragraph_id": 63,
"text": "Note: Flags indicate national team as defined under FIFA eligibility rules. Players may hold more than one non-FIFA nationality.",
"title": "Players"
},
{
"paragraph_id": 64,
"text": "Source:",
"title": "Honours and achievements"
},
{
"paragraph_id": 65,
"text": "League",
"title": "Honours and achievements"
},
{
"paragraph_id": 66,
"text": "Cup",
"title": "Honours and achievements"
}
] | Charlton Athletic Football Club is an English professional football club based in Charlton, south-east London, which compete in EFL League One, the third tier of the English football league system. Their home ground is The Valley, where the club have played since 1919. They also played at The Mount in Catford during the 1923–24 season, and spent seven years at Selhurst Park and the Boleyn Ground between 1985 and 1992, first because of financial issues and then because of safety concerns raised by the local council. The club's traditional kit consists of red shirts, white shorts and red socks, and their most commonly used nickname is The Addicks. Charlton share local rivalries with fellow South London clubs Crystal Palace and Millwall. The club was founded on 9 June 1905 and turned professional in 1920. They spent one season in the Kent League and one season in the Southern League, before being invited to join the newly formed Football League Third Division South in 1921. They won the division in the 1928–29 season, and again in 1934–35 following relegation in 1933. Charlton were promoted out of the Second Division in 1935–36, and finished second in the First Division the next season. Having been beaten finalists in 1946, they lifted the FA Cup the following year with a 1–0 victory over Burnley. The departure in 1956 of Jimmy Seed, manager for 23 years, saw the club relegated from the top flight the following year. Relegated again in 1972, Charlton were promoted from the Third Division in 1974–75, and again in 1980–81 following relegation the previous season. Charlton recovered from administration to secure promotion back to the First Division in 1985–86, and went on to lose the 1987 Full Members' Cup final, though they won the 1987 play-off final to retain their top-flight status. Having been relegated in 1990, Charlton won the 1998 play-off final to make their debut in the Premier League. Though they were relegated the next year, manager Alan Curbishley took them back up as champions in 1999–2000. Charlton spent seven successive years in the Premier League, before suffering two relegations in three years. They won League One with 101 points in 2011–12, though they were relegated from the Championship in 2016, and again in 2020 after winning the 2019 League One play-off final. | 2002-02-25T15:51:15Z | 2024-01-01T01:03:04Z | [
"Template:Convert",
"Template:Cite book",
"Template:Commons category",
"Template:Further",
"Template:EFL League One",
"Template:Authority control",
"Template:Infobox football club",
"Template:Fs mid",
"Template:Fs end",
"Template:English football updater",
"Template:Citation needed",
"Template:Webarchive",
"Template:Football in London",
"Template:Rp",
"Template:Clear",
"Template:Updated",
"Template:Flagicon",
"Template:BBC Football Info",
"Template:Charlton Athletic F.C.",
"Template:Charlton Athletic F.C. seasons",
"Template:Short description",
"Template:Main",
"Template:Portal",
"Template:Premier League",
"Template:Use dmy dates",
"Template:See also",
"Template:Fs start",
"Template:Reflist",
"Template:Cite news",
"Template:Official website",
"Template:Use British English",
"Template:R",
"Template:Fs player",
"Template:Cite web",
"Template:EFL Championship"
] | https://en.wikipedia.org/wiki/Charlton_Athletic_F.C. |
6,721 | Cross-country skiing | Cross-country skiing is a form of skiing whereby skiers traverse snow-covered terrain without the use of ski lifts or other assistance. Cross-country skiing is widely practiced as a sport and recreational activity; however, some still use it as a means of transportation. Variants of cross-country skiing are adapted to a range of terrain, from unimproved and sometimes mountainous country to groomed courses designed specifically for the sport.
Modern cross-country skiing is similar to the original form of skiing, from which all skiing disciplines evolved, including alpine skiing, ski jumping and Telemark skiing. Skiers propel themselves either by striding forward (classic style) or side-to-side in a skating motion (skate skiing), aided by arms pushing on ski poles against the snow. It is practised in regions with snow-covered landscapes, including Europe, Canada, Russia, the United States, Australia and New Zealand.
Competitive cross-country skiing is one of the Nordic skiing sports. Cross-country skiing and rifle marksmanship are the two components of biathlon. Ski orienteering is a form of cross-country skiing, which includes map navigation along snow trails and tracks.
The word ski comes from the Old Norse word skíð, which means stick of wood. Skiing started almost five millennia ago in Scandinavia as a technique for traveling cross-country over snow on skis. It may have been practised as early as 600 BCE in Daxing'anling, in what is now China. Early historical evidence includes Procopius's description (around 550 CE) of the Sami people as skrithiphinoi, translated as "ski-running Sami". Birkely argues that the Sami people have practiced skiing for more than 6,000 years, evidenced by the very old Sami word čuoigat for skiing. Egil Skallagrimsson's 950 CE saga describes King Haakon the Good's practice of sending his tax collectors out on skis. The Gulating law (1274) stated that "No moose shall be disturbed by skiers on private land." Cross-country skiing evolved from a utilitarian means of transportation into a worldwide recreational activity and sport, which branched out into other forms of skiing starting in the mid-1800s.
Early skiers used one long pole or spear in addition to the skis. The first depiction of a skier with two ski poles dates to 1741. Traditional skis, used for snow travel in Norway and elsewhere into the 1800s, often comprised one short ski with a natural fur traction surface, the andor, and one long for gliding, the langski—one being up to 100 cm (39 in) longer than the other—allowing skiers to propel themselves with a scooter motion. This combination has a long history among the Sami people. Skis up to 280 cm have been produced in Finland, and the longest recorded ski in Norway is 373 cm.
Ski warfare, the use of ski-equipped troops in war, is first recorded by the Danish historian Saxo Grammaticus in the 13th century. These troops were reportedly able to cover distances comparable to those of light cavalry. The garrison in Trondheim used skis at least from 1675, and the Danish-Norwegian army included specialized skiing battalions from 1747—details of military ski exercises from 1767 are on record. In 1799 the French traveller Jacques de la Tocnaye recorded his visit to Norway in his travel diary. Norwegian immigrants used skis ("Norwegian snowshoes") in the US midwest from around 1836. Norwegian immigrant "Snowshoe Thompson" transported mail by skiing across the Sierra Nevada between California and Nevada from 1856. In 1888 Norwegian explorer Fridtjof Nansen and his team crossed the Greenland icecap on skis. Norwegian workers on the Buenos Aires–Valparaiso railway line introduced skiing in South America around 1890. In 1902 the Norwegian consul in Kobe imported ski equipment and introduced skiing to the Japanese, motivated by the death of Japanese soldiers during a snow storm. In 1910 Roald Amundsen used skis on his South Pole expedition. Starting in 1919, Vladimir Lenin helped popularize the activity in the Soviet Union.
Norwegian skiing regiments organized military skiing contests in the 18th century, divided into four classes: shooting at a target while skiing at "top speed", downhill racing among trees, downhill racing on large slopes without falling, and "long racing" on "flat ground". An early record of a public ski competition comes from Tromsø in 1843. In Norwegian, langrenn refers to "competitive skiing where the goal is to complete a specific distance in groomed tracks in the shortest possible time". In Norway, ski touring competitions (Norwegian: turrenn) are long-distance cross-country competitions open to the public; competition is usually within age intervals.
A new technique, skate skiing, was experimented with early in the 20th century, but was not widely adopted until the 1980s. Johan Grøttumsbråten used the skating technique at the 1931 World Championship in Oberhof, one of the earliest recorded uses of skating in competitive cross-country skiing. The technique was later used in ski orienteering in the 1960s on roads and other firm surfaces. It became widespread during the 1980s after the success of Bill Koch (United States) in the 1982 Cross-country Skiing Championships drew more attention to the skating style. Norwegian skier Ove Aunli started using the technique in 1984, when he found it to be much faster than the classic style. Finnish skier Pauli Siitonen had developed a one-sided variant of the style in the 1970s, leaving one ski in the track while skating to the side with the other during endurance events; this became known as the "marathon skate".
The word ski comes from the Old Norse word skíð, which means "cleft wood", "stick of wood" or "ski". Unlike English "to ski", idiomatic Norwegian does not use an equivalent verb form. In modern Norwegian, a variety of terms refer to cross-country skiing, including:
In contrast, alpine skiing is referred to as stå på ski (literally "stand on skis").
Fridtjof Nansen described the crossing of Greenland as På ski over Grønland, literally "On skis across Greenland", while the English edition of the report was titled The first crossing of Greenland. Nansen referred to the activity of traversing snow on skis as Norwegian: skilöbning (he used the term also in the English translation), which may be translated as ski running. Nansen used skilöbning for all forms of skiing, but noted that ski jumping is purely a competitive sport and not for amateurs. He further noted that in some competitions the skier "is also required to show his skill in turning his ski to one side or the other within given marks" at full speed on a steep hill. Nansen regarded these forms (i.e., jumping and slalom) as "special arts", and believed that the most important branch of skiing was travel "in an ordinary way across the country". In Germany, Nansen's Greenland report was published as Auf Schneeschuhen durch Grönland (literally "On snowshoes through Greenland"). The German term, Schneeschuh, was supplanted by the borrowed Norwegian word, Ski, in the late 19th century. The Norwegian encyclopedia of sports also uses the term skiløping (literally "ski running") for all forms of skiing. Around 1900 the word Skilaufen was used in German in the same sense as Norwegian: skiløping.
Recreational cross-country skiing includes ski touring and groomed-trail skiing, typically at resorts or in parklands. It is an accessible form of recreation for persons with vision and mobility impairments. A related form of recreation is dog skijoring—a winter sport where a cross-country skier is assisted by one or more dogs.
Ski touring takes place off-piste and outside of ski resorts. Tours may extend over multiple days. Typically, skis, bindings, and boots allow for free movement of the heel to enable a walking pace, as with Nordic disciplines and unlike Alpine skiing. Ski touring's subgenre ski mountaineering involves independently navigating and route finding through potential avalanche terrain and often requires familiarity with meteorology along with skiing skills. Ski touring can be faster and easier than summer hiking in some terrain, allowing for traverses and ascents that would be harder in the summer. Skis can also be used to access backcountry alpine climbing routes when snow is off the technical route, but still covers the hiking trail. In some countries, organizations maintain a network of huts for use by cross-country skiers in wintertime. For example, the Norwegian Trekking Association maintains over 400 huts stretching across thousands of kilometres of trails which hikers can use in the summer and skiers in the winter.
Groomed trail skiing occurs at facilities such as Nordmarka (Oslo), Royal Gorge Cross Country Ski Resort and Gatineau Park in Quebec, where trails are laid out and groomed for both classic and skate-skiing. Such grooming and track setting (for classic technique) requires specialized equipment and techniques that adapt to the condition of the snow. Trail preparation employs snow machines which tow snow-compaction, texturing and track-setting devices. Groomers must adapt such equipment to the condition of the snow—crystal structure, temperature, degree of compaction, moisture content, etc. Depending on the initial condition of the snow, grooming may achieve an increase in density for new-fallen snow or a decrease in density for icy or compacted snow. Cross-country ski facilities may incorporate a course design that meets homologation standards for such organizations as the International Olympic Committee, the International Ski Federation, or national standards. Standards address course distances, degree of difficulty with maximums in elevation difference and steepness—both up and downhill, plus other factors. Some facilities have night-time lighting on select trails—called lysløype (light trails) in Norwegian and elljusspår (electric-light trails) in Swedish. The first lysløype opened in 1946 in Nordmarka and at Byåsen (Trondheim).
Cross-country ski competition encompasses a variety of formats for races over courses of varying lengths according to rules sanctioned by the International Ski Federation (FIS) and by national organizations, such as the U.S. Ski and Snowboard Association and Cross Country Ski Canada. It also encompasses cross-country ski marathon events, sanctioned by the Worldloppet Ski Federation, cross-country ski orienteering events, sanctioned by the International Orienteering Federation, and Paralympic cross-country skiing, sanctioned by the International Paralympic Committee.
The FIS Nordic World Ski Championships have been held in various numbers and types of events since 1925 for men and since 1954 for women. From 1924 to 1939, the World Championships were held every year, including the Winter Olympic Games. After World War II, the World Championships were held every four years from 1950 to 1982. Since 1985, the World Championships have been held in odd-numbered years. Notable cross-country ski competitions include the Winter Olympics, the FIS Nordic World Ski Championships, and the FIS World Cup events (including the Holmenkollen).
Cross-country ski marathons—races with distances greater than 40 kilometers—have two cup series, the Ski Classics, which started in 2011, and the Worldloppet. Skiers race in classic or free-style (skating) events, depending on the rules of the race. Notable ski marathons include the Vasaloppet in Sweden, Birkebeineren in Norway, the Engadin Skimarathon in Switzerland, the American Birkebeiner, the Tour of Anchorage in Anchorage, Alaska, and the Boreal Loppet, held in Forestville, Quebec, Canada.
Biathlon combines cross-country skiing and rifle shooting. Depending on the shooting performance, extra distance or time is added to the contestant's total running distance/time. For each shooting round, the biathlete must hit five targets; the skier receives a penalty for each missed target, which varies according to the competition rules.
Ski orienteering is a form of cross-country skiing competition that requires navigation in a landscape, making optimal route choices at racing speeds. Standard orienteering maps are used, but with special green overprinting of trails and tracks to indicate their navigability in snow; other symbols indicate whether any roads are snow-covered or clear. Standard skate-skiing equipment is used, along with a map holder attached to the chest. It is one of the four orienteering disciplines recognized by the International Orienteering Federation. Upper body strength is especially important because of frequent double poling along narrow snow trails.
Paralympic cross-country ski competition is an adaptation of cross-country skiing for athletes with disabilities. Paralympic cross-country skiing includes standing events, sitting events (for wheelchair users), and events for visually impaired athletes under the rules of the International Paralympic Committee. These are divided into several categories for people who are missing limbs, have amputations, are blind, or have another physical disability, allowing them to continue their sport.
Cross-country skiing has two basic propulsion techniques, which apply to different surfaces: classic (undisturbed snow and tracked snow) and skate skiing (firm, smooth snow surfaces). The classic technique relies on a wax or texture on the ski bottom under the foot for traction on the snow, allowing the skier to slide the other ski forward in virgin or tracked snow. With the skate skiing technique, a skier slides on alternating skis, angled away from each other, on a firm snow surface, in a manner similar to ice skating. Both techniques employ poles with baskets that allow the arms to participate in the propulsion. Specialized equipment is adapted to each technique and each type of terrain. A variety of turns are used when descending.
Poles contribute to forward propulsion, either simultaneously (usual for the skate technique) or in alternating sequence (common for the classical technique as the "diagonal stride"). Double poling is also used with the classical technique on flats and slight downhills, where it achieves higher speed than the diagonal stride; the diagonal stride is favored uphill, where it delivers more power.
The classic style is often used on prepared trails (pistes) that have pairs of parallel grooves (tracks) cut into the snow. It is also the most usual technique where no tracks have been prepared. With this technique, each ski is pushed forward from the other stationary ski in a striding and gliding motion, alternating foot to foot. With the "diagonal stride" variant the poles are planted alternately on the side opposite the forward-striding foot; with the "kick-double-pole" variant the poles are planted simultaneously with every other stride. At times, especially on gentle descents, double poling is the sole means of propulsion. On uphill terrain, techniques include the "side step" for steep slopes, moving the skis perpendicular to the fall line; the "herringbone" for moderate slopes, where the skier takes alternating steps with the skis splayed outwards; and, for gentle slopes, the diagonal technique with shorter strides and greater arm force on the poles.
With skate skiing, the skier provides propulsion on a smooth, firm snow surface by pushing alternating skis away from one another at an angle, in a manner similar to ice skating. Skate-skiing usually involves a coordinated use of poles and the upper body to add impetus, sometimes with a double pole plant each time the ski is extended on a temporarily "dominant" side ("V1") or with a double pole plant each time the ski is extended on either side ("V2"). Skiers climb hills with these techniques by widening the angle of the "V" and by making more frequent, shorter strides and more forceful use of poles. A variant of the technique is the "marathon skate" or "Siitonen step", where the skier leaves one ski in the track while skating outwards to the side with the other ski.
Turns, used while descending or for braking, include the snowplough (or "wedge turn"), the stem christie (or "wedge christie"), parallel turn, and the Telemark turn. The step turn is used for maintaining speed during descents or out of track on flats.
Equipment comprises skis, poles, boots and bindings; these vary according to:
Skis used in cross-country are lighter and narrower than those used in alpine skiing. Ski bottoms are designed to provide a gliding surface and, for classic skis, a traction zone under foot. The base of the gliding surface is a plastic material that is designed both to minimize friction and, in many cases, to accept waxes. Glide wax may be used on the tails and tips of classic skis and across the length of skate skis.
Each type of ski is sized and designed differently. Length affects maneuverability; camber affects pressure on the snow beneath the feet of the skier; side-cut affects the ease of turning; width affects forward friction; overall area on the snow affects bearing capacity; and tip geometry affects the ability to penetrate new snow or to stay in a track. Each of the following ski types has a different combination of these attributes:
Glide waxes enhance the speed of the gliding surface, and are applied by ironing them onto the ski and then polishing the ski bottom. Three classes of glide wax are available, depending on the level of desired performance, with higher performance coming at higher cost. Hydrocarbon glide waxes, based on paraffin, are common for recreational use. Race waxes comprise a combination of fluorinated hydrocarbon waxes and fluorocarbon overlays. Fluorocarbons decrease the surface tension and surface area of the water between the ski and the snow, increasing the speed and glide of the ski under specific conditions. Either combined with the wax or applied after it in spray, powder, or block form, fluorocarbons significantly improve the glide of the ski. Since the 2021–22 race season, fluorinated products have been banned in FIS-sanctioned competitions.
Skis designed for classic technique, both in track and in virgin snow, rely on a traction zone, called the "grip zone" or "kick zone", underfoot. This comes either from (a) texture, such as "fish scales" or mohair skins, designed to slide forward but not backwards, built into the grip zone of waxless skis, or applied devices, e.g. climbing skins, or (b) from grip waxes. Grip waxes are classified according to their hardness: harder waxes are for colder and newer snow. An incorrect choice of grip wax for the snow conditions encountered may cause ski slippage (wax too hard for the conditions) or snow sticking to the grip zone (wax too soft for the conditions). Grip waxes generate grip by interacting with snow crystals, which vary with temperature, age and compaction. Hard grip waxes do not work well for snow which has metamorphosed to coarse grains, whether icy or wet. In these conditions, skiers opt for a stickier substance, called klister.
Ski boots are attached to the ski only at the toe, leaving the heel free. Depending on application, boots may be lightweight (performance skiing) or heavier and more supportive (back-country skiing).
Bindings connect the boot to the ski. There are three primary groups of binding systems used in cross-country skiing (in descending order of importance):
Ski poles are used for balance and propulsion. Modern cross-country ski poles are made from aluminium, fibreglass-reinforced plastic, or carbon fibre, depending on weight, cost and performance parameters. Formerly they were made of wood or bamboo. They feature a foot (called a basket) near the end of the shaft that provides a pushing platform as it makes contact with the snow. Baskets vary in size according to the expected softness or firmness of the snow. Racing poles feature smaller, lighter baskets than recreational poles. Poles designed for skating are longer than those designed for classic skiing. Traditional skiing in the 1800s used a single pole for both cross-country and downhill; the single pole was longer and stronger than the poles used in pairs. In competitive cross-country skiing, poles in pairs were introduced around 1900. | [
{
"paragraph_id": 0,
"text": "Cross-country skiing is a form of skiing whereby skiers traverse snow-covered terrain without use of ski lifts or other assistance. Cross-country skiing is widely practiced as a sport and recreational activity; however, some still use it as a means of transportation. Variants of cross-country skiing are adapted to a range of terrain which spans unimproved, sometimes mountainous terrain to groomed courses that are specifically designed for the sport.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Modern cross-country skiing is similar to the original form of skiing, from which all skiing disciplines evolved, including alpine skiing, ski jumping and Telemark skiing. Skiers propel themselves either by striding forward (classic style) or side-to-side in a skating motion (skate skiing), aided by arms pushing on ski poles against the snow. It is practised in regions with snow-covered landscapes, including Europe, Canada, Russia, the United States, Australia and New Zealand.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Competitive cross-country skiing is one of the Nordic skiing sports. Cross-country skiing and rifle marksmanship are the two components of biathlon. Ski orienteering is a form of cross-country skiing, which includes map navigation along snow trails and tracks.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word ski comes from the Old Norse word skíð which means stick of wood. Skiing started as a technique for traveling cross-country over snow on skis, starting almost five millennia ago with beginnings in Scandinavia. It may have been practised as early as 600 BCE in Daxing'anling, in what is now China. Early historical evidence includes Procopius's (around CE 550) description of Sami people as skrithiphinoi translated as \"ski running samis\". Birkely argues that the Sami people have practiced skiing for more than 6000 years, evidenced by the very old Sami word čuoigat for skiing. Egil Skallagrimsson's 950 CE saga describes King Haakon the Good's practice of sending his tax collectors out on skis. The Gulating law (1274) stated that \"No moose shall be disturbed by skiers on private land.\" Cross-country skiing evolved from a utilitarian means of transportation to being a worldwide recreational activity and sport, which branched out into other forms of skiing starting in the mid-1800s.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Early skiers used one long pole or spear in addition to the skis. The first depiction of a skier with two ski poles dates to 1741. Traditional skis, used for snow travel in Norway and elsewhere into the 1800s, often comprised one short ski with a natural fur traction surface, the andor, and one long for gliding, the langski—one being up to 100 cm (39 in) longer than the other—allowing skiers to propel themselves with a scooter motion. This combination has a long history among the Sami people. Skis up to 280 cm have been produced in Finland, and the longest recorded ski in Norway is 373 cm.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Ski warfare, the use of ski-equipped troops in war, is first recorded by the Danish historian Saxo Grammaticus in the 13th century. These troops were reportedly able to cover distances comparable to that of light cavalry. The garrison in Trondheim used skis at least from 1675, and the Danish-Norwegian army included specialized skiing battalions from 1747—details of military ski exercises from 1767 are on record. Skis were used in military exercises in 1747. In 1799 French traveller Jacques de la Tocnaye recorded his visit to Norway in his travel diary: Norwegian immigrants used skis (\"Norwegian snowshoes\") in the US midwest from around 1836. Norwegian immigrant \"Snowshoe Thompson\" transported mail by skiing across the Sierra Nevada between California and Nevada from 1856. In 1888 Norwegian explorer Fridtjof Nansen and his team crossed the Greenland icecap on skis. Norwegian workers on the Buenos Aires - Valparaiso railway line introduced skiing in South America around 1890. In 1910 Roald Amundsen used skis on his South Pole Expedition. In 1902 the Norwegian consul in Kobe imported ski equipment and introduced skiing to the Japanese, motivated by the death of Japanese soldiers during a snow storm. Starting in 1919, Vladimir Lenin helped popularize the activity in the Soviet Union.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Norwegian skiing regiments organized military skiing contests in the 18th century, divided in four classes: shooting at a target while skiing at \"top speed\", downhill racing among trees, downhill racing on large slopes without falling, and \"long racing\" on \"flat ground\". An early record of a public ski competition occurred in Tromsø, 1843. In Norwegian, langrenn refers to \"competitive skiing where the goal is to complete a specific distance in groomed tracks in the shortest possible time\". In Norway, ski touring competitions (Norwegian: turrenn) are long-distance cross-country competitions open to the public, competition is usually within age intervals.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "A new technique, skate skiing, was experimented with early in the 20th Century, but was not widely adopted until the 1980s. Johan Grøttumsbråten used the skating technique at the 1931 World Championship in Oberhof, one of the earliest recorded use of skating in competitive cross-country skiing. This technique was later used in ski orienteering in the 1960s on roads and other firm surfaces. It became widespread during the 1980s after the success of Bill Koch (United States) in 1982 Cross-country Skiing Championships drew more attention to the skating style. Norwegian skier Ove Aunli started using the technique in 1984, when he found it to be much faster than classic style. Finnish skier, Pauli Siitonen, developed a one-sided variant of the style in the 1970s, leaving one ski in the track while skating to the side with the other one during endurance events; this became known as the \"marathon skate\".",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The word ski comes from the Old Norse word skíð which means \"cleft wood\", \"stick of wood\" or \"ski\". Norwegian language does not use a verb-form equivalent in idiomatic speech, unlike English \"to ski\". In modern Norwegian, a variety of terms refer to cross-country skiing, including:",
"title": "History"
},
{
"paragraph_id": 9,
"text": "In contrast, alpine skiing is referred to as stå på ski (literally \"stand on skis\").",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Fridtjof Nansen, describes the crossing of Greenland as På ski over Grønland, literally \"On skis across Greenland\", while the English edition of the report was titled, The first crossing of Greenland. Nansen referred to the activity of traversing snow on skis as Norwegian: skilöbning (he used the term also in the English translation), which may be translated as ski running. Nansen used skilöbning, regarding all forms of skiing, but noted that ski jumping is purely a competitive sport and not for amateurs. He further noted that in some competitions the skier \"is also required to show his skill in turning his ski to one side or the other within given marks\" at full speed on a steep hill. Nansen regarded these forms (i.e., jumping and slalom) as \"special arts\", and believed that the most important branch of skiing was travel \"in an ordinary way across the country\". In Germany, Nansen's Greenland report was published as Auf Schneeschuhen durch Grönland (literally \"On snowshoes through Greenland\"). The German term, Schneeschuh, was supplanted by the borrowed Norwegian word, Ski, in the late 19th century. The Norwegian encyclopedia of sports also uses the term, skiløping, (literally \"ski running\") for all forms of skiing. Around 1900 the word Skilaufen was used in German in the same sense as Norwegian: skiløping.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Recreational cross-country skiing includes ski touring and groomed-trail skiing, typically at resorts or in parklands. It is an accessible form of recreation for persons with vision and mobility impairments. A related form of recreation is dog skijoring—a winter sport where a cross-country skier is assisted by one or more dogs.",
"title": "Recreation"
},
{
"paragraph_id": 12,
"text": "Ski touring takes place off-piste and outside of ski resorts. Tours may extend over multiple days. Typically, skis, bindings, and boots allow for free movement of the heel to enable a walking pace, as with Nordic disciplines and unlike Alpine skiing. Ski touring's subgenre ski mountaineering involves independently navigating and route finding through potential avalanche terrain and often requires familiarity with meteorology along with skiing skills. Ski touring can be faster and easier than summer hiking in some terrain, allowing for traverses and ascents that would be harder in the summer. Skis can also be used to access backcountry alpine climbing routes when snow is off the technical route, but still covers the hiking trail. In some countries, organizations maintain a network of huts for use by cross-country skiers in wintertime. For example, the Norwegian Trekking Association maintains over 400 huts stretching across thousands of kilometres of trails which hikers can use in the summer and skiers in the winter.",
"title": "Recreation"
},
{
"paragraph_id": 13,
"text": "Groomed trail skiing occurs at facilities such as Nordmarka (Oslo), Royal Gorge Cross Country Ski Resort and Gatineau Park in Quebec, where trails are laid out and groomed for both classic and skate-skiing. Such grooming and track setting (for classic technique) requires specialized equipment and techniques that adapt to the condition of the snow. Trail preparation employs snow machines which tow snow-compaction, texturing and track-setting devices. Groomers must adapt such equipment to the condition of the snow—crystal structure, temperature, degree of compaction, moisture content, etc. Depending on the initial condition of the snow, grooming may achieve an increase in density for new-fallen snow or a decrease in density for icy or compacted snow. Cross-country ski facilities may incorporate a course design that meets homologation standards for such organizations as the International Olympic Committee, the International Ski Federation, or national standards. Standards address course distances, degree of difficulty with maximums in elevation difference and steepness—both up and downhill, plus other factors. Some facilities have night-time lighting on select trails—called lysløype (light trails) in Norwegian and elljusspår (electric-light trails) in Swedish. The first lysløype opened in 1946 in Nordmarka and at Byåsen (Trondheim).",
"title": "Recreation"
},
{
"paragraph_id": 14,
"text": "Cross-country ski competition encompasses a variety of formats for races over courses of varying lengths according to rules sanctioned by the International Ski Federation (FIS) and by national organizations, such as the U.S. Ski and Snowboard Association and Cross Country Ski Canada. It also encompasses cross-country ski marathon events, sanctioned by the Worldloppet Ski Federation, cross-country ski orienteering events, sanctioned by the International Orienteering Federation, and Paralympic cross-country skiing, sanctioned by the International Paralympic Committee.",
"title": "Competition"
},
{
"paragraph_id": 15,
"text": "The FIS Nordic World Ski Championships have been held in various numbers and types of events since 1925 for men and since 1954 for women. From 1924 to 1939, the World Championships were held every year, including the Winter Olympic Games. After World War II, the World Championships were held every four years from 1950 to 1982. Since 1985, the World Championships have been held in odd-numbered years. Notable cross-country ski competitions include the Winter Olympics, the FIS Nordic World Ski Championships, and the FIS World Cup events (including the Holmenkollen).",
"title": "Competition"
},
{
"paragraph_id": 16,
"text": "Cross-country ski marathons—races with distances greater than 40 kilometers—have two cup series, the Ski Classics, which started in 2011, and the Worldloppet. Skiers race in classic or free-style (skating) events, depending on the rules of the race. Notable ski marathons, include the Vasaloppet in Sweden, Birkebeineren in Norway, the Engadin Skimarathon in Switzerland, the American Birkebeiner, the Tour of Anchorage in Anchorage, Alaska, and the Boreal Loppet, held in Forestville, Quebec, Canada.",
"title": "Competition"
},
{
"paragraph_id": 17,
"text": "Biathlon combines cross-country skiing and rifle shooting. Depending on the shooting performance, extra distance or time is added to the contestant's total running distance/time. For each shooting round, the biathlete must hit five targets; the skier receives a penalty for each missed target, which varies according to the competition rules.",
"title": "Competition"
},
{
"paragraph_id": 18,
"text": "Ski orienteering is a form of cross-country skiing competition that requires navigation in a landscape, making optimal route choices at racing speeds. Standard orienteering maps are used, but with special green overprinting of trails and tracks to indicate their navigability in snow; other symbols indicate whether any roads are snow-covered or clear. Standard skate-skiing equipment is used, along with a map holder attached to the chest. It is one of the four orienteering disciplines recognized by the International Orienteering Federation. Upper body strength is especially important because of frequent double poling along narrow snow trails.",
"title": "Competition"
},
{
"paragraph_id": 19,
"text": "Paralympic cross-country ski competition is an adaptation of cross-country skiing for athletes with disabilities. Paralympic cross-country skiing includes standing events, sitting events (for wheelchair users), and events for visually impaired athletes under the rules of the International Paralympic Committee. These are divided into several categories for people who are missing limbs, have amputations, are blind, or have any other physical disability, to continue their sport.",
"title": "Competition"
},
{
"paragraph_id": 20,
"text": "Cross-country skiing has two basic propulsion techniques, which apply to different surfaces: classic (undisturbed snow and tracked snow) and skate skiing (firm, smooth snow surfaces). The classic technique relies on a wax or texture on the ski bottom under the foot for traction on the snow to allow the skier to slide the other ski forward in virgin or tracked snow. With the skate skiing technique a skier slides on alternating skis on a firm snow surface at an angle from each other in a manner similar to ice skating. Both techniques employ poles with baskets that allow the arms to participate in the propulsion. Specialized equipment is adapted to each technique and each type of terrain. A variety of turns are used, when descending.",
"title": "Techniques"
},
{
"paragraph_id": 21,
"text": "Poles contribute to forward propulsion, either simultaneously (usual for the skate technique) or in alternating sequence (common for the classical technique as the \"diagonal stride\"). Double poling is also used with the classical technique when higher speed can be achieved on flats and slight downhills than is available in the diagonal stride, which is favored to achieve higher power going uphill.",
"title": "Techniques"
},
{
"paragraph_id": 22,
"text": "The classic style is often used on prepared trails (pistes) that have pairs of parallel grooves (tracks) cut into the snow. It is also the most usual technique where no tracks have been prepared. With this technique, each ski is pushed forward from the other stationary ski in a striding and gliding motion, alternating foot to foot. With the \"diagonal stride\" variant the poles are planted alternately on the opposite side of the forward-striding foot; with the \"kick-double-pole\" variant the poles are planted simultaneously with every other stride. At times, especially with gentle descents, double poling is the sole means of propulsion. On uphill terrain, techniques include the \"side step\" for steep slopes, moving the skis perpendicular to the fall line, the \"herringbone\" for moderate slopes, where the skier takes alternating steps with the skis splayed outwards, and, for gentle slopes, the skier uses the diagonal technique with shorter strides and greater arm force on the poles.",
"title": "Techniques"
},
{
"paragraph_id": 23,
"text": "With skate skiing, the skier provides propulsion on a smooth, firm snow surface by pushing alternating skis away from one another at an angle, in a manner similar to ice skating. Skate-skiing usually involves a coordinated use of poles and the upper body to add impetus, sometimes with a double pole plant each time the ski is extended on a temporarily \"dominant\" side (\"V1\") or with a double pole plant each time the ski is extended on either side (\"V2\"). Skiers climb hills with these techniques by widening the angle of the \"V\" and by making more frequent, shorter strides and more forceful use of poles. A variant of the technique is the \"marathon skate\" or \"Siitonen step\", where the skier leaves one ski in the track while skating outwards to the side with the other ski.",
"title": "Techniques"
},
{
"paragraph_id": 24,
"text": "Turns, used while descending or for braking, include the snowplough (or \"wedge turn\"), the stem christie (or \"wedge christie\"), parallel turn, and the Telemark turn. The step turn is used for maintaining speed during descents or out of track on flats.",
"title": "Techniques"
},
{
"paragraph_id": 25,
"text": "Equipment comprises skis, poles, boots and bindings; these vary according to:",
"title": "Equipment"
},
{
"paragraph_id": 26,
"text": "Skis used in cross-country are lighter and narrower than those used in alpine skiing. Ski bottoms are designed to provide a gliding surface and, for classic skis, a traction zone under foot. The base of the gliding surface is a plastic material that is designed both to minimize friction and, in many cases, to accept waxes. Glide wax may be used on the tails and tips of classic skis and across the length of skate skis.",
"title": "Equipment"
},
{
"paragraph_id": 27,
"text": "Each type of ski is sized and designed differently. Length affects maneuverability; camber affects pressure on the snow beneath the feet of the skier; side-cut affects the ease of turning; width affects forward friction; overall area on the snow affects bearing capacity; and tip geometry affects the ability to penetrate new snow or to stay in a track. Each of the following ski types has a different combination of these attributes:",
"title": "Equipment"
},
{
"paragraph_id": 28,
"text": "Glide waxes enhance the speed of the gliding surface, and are applied by ironing them onto the ski and then polishing the ski bottom. Three classes of glide wax are available, depending on the level of desired performance with higher performance coming at higher cost. Hydrocarbon glide waxes, based on paraffin are common for recreational use. Race waxes comprise a combination of fluorinated hydrocarbon waxes and fluorocarbon overlays. Fluorocarbons decrease surface tension and surface area of the water between the ski and the snow, increasing speed and glide of the ski under specific conditions. Either combined with the wax or applied after in a spray, powder, or block form, fluorocarbons significantly improve the glide of the ski. Since the 2021-2022 race season, fluorinated products are banned in FIS sanctioned competitions.",
"title": "Equipment"
},
{
"paragraph_id": 29,
"text": "Skis designed for classic technique, both in track and in virgin snow, rely on a traction zone, called the \"grip zone\" or \"kick zone\", underfoot. This comes either from a) texture, such as \"fish scales\" or mohair skins, designed to slide forward but not backwards, that is built into the grip zone of waxless skis, or from applied devices, e.g. climbing skins, or b) from grip waxes. Grip waxes are classified according to their hardness: harder waxes are for colder and newer snow. An incorrect choice of grip wax for the snow conditions encountered may cause ski slippage (wax too hard for the conditions) or snow sticking to the grip zone (wax too soft for the conditions). Grip waxes generate grip by interacting with snow crystals, which vary with temperature, age and compaction. Hard grip waxes do not work well for snow which has metamorphosed to having coarse grains, whether icy or wet. In these conditions, skiers opt for a stickier substance, called klister.",
"title": "Equipment"
},
{
"paragraph_id": 30,
"text": "Ski boots are attached to the ski only at the toe, leaving the heel free. Depending on application, boots may be lightweight (performance skiing) or heavier and more supportive (back-country skiing).",
"title": "Equipment"
},
{
"paragraph_id": 31,
"text": "Bindings connect the boot to the ski. There are three primary groups of binding systems used in cross-country skiing (in descending order of importance):",
"title": "Equipment"
},
{
"paragraph_id": 32,
"text": "Ski poles are used for balance and propulsion. Modern cross-country ski poles are made from aluminium, fibreglass-reinforced plastic, or carbon fibre, depending on weight, cost and performance parameters. Formerly they were made of wood or bamboo. They feature a foot (called a basket) near the end of the shaft that provides a pushing platform, as it makes contact with the snow. Baskets vary in size, according to the expected softness/firmness of the snow. Racing poles feature smaller, lighter baskets than recreational poles. Poles designed for skating are longer than those designed for classic skiing. Traditional skiing in the 1800s used a single pole for both cross-country and downhill. The single pole was longer and stronger than the poles that are used in pairs. In competitive cross-country poles in pairs were introduced around 1900.",
"title": "Equipment"
}
] | Cross-country skiing is a form of skiing whereby skiers traverse snow-covered terrain without use of ski lifts or other assistance. Cross-country skiing is widely practiced as a sport and recreational activity; however, some still use it as a means of transportation. Variants of cross-country skiing are adapted to a range of terrain which spans unimproved, sometimes mountainous terrain to groomed courses that are specifically designed for the sport. Modern cross-country skiing is similar to the original form of skiing, from which all skiing disciplines evolved, including alpine skiing, ski jumping and Telemark skiing. Skiers propel themselves either by striding forward or side-to-side in a skating motion, aided by arms pushing on ski poles against the snow. It is practised in regions with snow-covered landscapes, including Europe, Canada, Russia, the United States, Australia and New Zealand. Competitive cross-country skiing is one of the Nordic skiing sports. Cross-country skiing and rifle marksmanship are the two components of biathlon. Ski orienteering is a form of cross-country skiing, which includes map navigation along snow trails and tracks. | 2001-10-08T22:43:14Z | 2023-12-07T23:15:11Z | [
"Template:About",
"Template:Infobox sport",
"Template:Convert",
"Template:Lang-no",
"Template:Citation",
"Template:Human-powered vehicles",
"Template:Good article",
"Template:Main article",
"Template:Lang",
"Template:Main articles",
"Template:Commons category",
"Template:Wikivoyage",
"Template:Use dmy dates",
"Template:Circa",
"Template:Cite web",
"Template:ISBN",
"Template:Skiing",
"Template:Short description",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Cite news",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cross-country_skiing |
6,724 | Copacabana, Rio de Janeiro | Copacabana (/ˌkoʊpəkəˈbænə/ KOH-pə-kə-BAN-ə, US also /-ˈbɑːnə/ -BAH-nə, Portuguese: [ˌkɔpakaˈbɐnɐ]) is a bairro (neighbourhood) located in the South Zone of the city of Rio de Janeiro, Brazil. It is most prominently known for its 4 km (2.5 miles) balneario beach, which is one of the most famous in the world.
The district was originally called Sacopenapã (translated from the Tupi language, it means "the way of the socós", the socós being a kind of bird) until the mid-18th century. It was renamed after the construction of a chapel holding a replica of the Virgen de Copacabana, the patron saint of Bolivia.
Copacabana begins at Princesa Isabel Avenue and ends at Posto Seis (lifeguard watchtower Six). Beyond Copacabana, there are two small beaches: one inside Fort Copacabana and the other right after it, Diabo ("Devil") Beach. Arpoador beach, where surfers go for its perfect waves, comes next, followed by the famous borough of Ipanema. The area served as one of the four "Olympic Zones" during the 2016 Summer Olympics. According to Riotur, the Tourism Secretariat of Rio de Janeiro, there are 63 hotels and 10 hostels in Copacabana.
Copacabana beach, located at the Atlantic shore, stretches from Posto Dois (lifeguard watchtower Two) to Posto Seis (lifeguard watchtower Six). Leme is at Posto Um (lifeguard watchtower One). There are historic forts at both ends of Copacabana beach; Fort Copacabana, built in 1914, is at the south end by Posto Seis and Fort Duque de Caxias, built in 1779, at the north end. Many hotels, restaurants, bars, nightclubs and residential buildings are located in the area. On Sundays and holidays, one side of Avenida Atlântica is closed to cars, giving residents and tourists more space for activities along the beach.
Copacabana Beach plays host to millions of revellers during the annual New Year's Eve celebrations, and in most years, has been the official venue of the FIFA Beach Soccer World Cup.
The Copacabana promenade is a large-scale pavement landscape, 4 kilometres long. It was rebuilt in 1970 and has used a black and white Portuguese pavement design since its origin in the 1930s: a geometric wave. The Copacabana promenade was designed by Roberto Burle Marx.
Copacabana has the 12th highest Human Development Index in Rio; the 2000 census put the HDI of Copacabana at 0.902.
According to the IBGE, 160,000 people live in Copacabana and 44,000 or 27.5% of them are 60 years old or older. Copacabana covers an area of 5.220 km², which gives the borough a population density of 20,400 people per km². Residential buildings eleven to thirteen stories high built next to each other dominate the borough. Houses and two-story buildings are rare.
When Rio was the capital of Brazil, Copacabana was considered one of the best neighborhoods in the country.
More than 40 different bus routes serve Copacabana, as do three subway Metro stations: Cantagalo, Siqueira Campos and Cardeal Arcoverde.
Three major arteries parallel to each other cut across the entire borough: Avenida Atlântica (Atlantic Avenue), a 6-lane, 4 km avenue by the beachside, and Nossa Senhora de Copacabana Avenue and Barata Ribeiro/Raul Pompéia Street, both of which are 4 lanes and 3.5 km in length. Barata Ribeiro Street changes its name to Raul Pompéia Street after the Sá Freire Alvim Tunnel. Twenty-four streets intersect all three major arteries, and seven other streets intersect some of the three.
The fireworks display in Rio de Janeiro to celebrate New Year's Eve is one of the largest in the world, lasting 15 to 20 minutes. It is estimated that 2,000,000 people go to Copacabana Beach to see the spectacle. The festival also includes a concert that extends throughout the night. The celebration has become one of the biggest tourist attractions of Rio de Janeiro, attracting visitors from all over Brazil as well as from different parts of the world, and the city hotels generally stay fully booked. The celebration is broadcast live on major Brazilian networks including TV Globo.
New Year's Eve has been celebrated on Copacabana beach since the 1950s, when cults of African origin such as Candomblé and Umbanda gathered in small groups dressed in white for ritual celebrations. The first fireworks display occurred in 1976, sponsored by a hotel on the waterfront, and it has been repeated ever since. In the 1990s the city saw the event as a great opportunity for promotion, and it organized and expanded the celebration.
An assessment made during the 1992 New Year's Eve celebration highlighted the risks associated with increasing crowd numbers on Copacabana beach after the fireworks display. Since the 1993–94 event, concerts have been held on the beach to retain the public. The result was a success, with egress spread out over a period of two hours without the previous turmoil, although critics claimed that it denied the spirit of the New Year's tradition of a religious festival with fireworks by the sea. The following year Rod Stewart beat attendance records. Finally, the Tribute to Tom Jobim - with Gal Costa, Gilberto Gil, Caetano Veloso, Chico Buarque, and Paulinho da Viola - consolidated the shows at the Copacabana Réveillon.
There was a need to transform the fireworks display into a show of the same quality as the concerts. The fireworks display was created by entrepreneurs Ricardo Amaral and Marius. From the previous 8–10 minutes, the time was extended to 20 minutes, and the quality and diversity of the fireworks were improved. A technical problem with the fireworks in 2000 required the display to be launched from ferries beginning with New Year's Eve 2001–02. New Year's Eve has begun to compete with the Carnival, and since 1992 it has been a tourist attraction in its own right.
There was no public celebration in 2020–21 due to the COVID-19 pandemic, but the fireworks show went on. | [
{
"paragraph_id": 0,
"text": "Copacabana (/ˌkoʊpəkəˈbænə/ KOH-pə-kə-BAN-ə, US also /-ˈbɑːnə/ -BAH-nə, Portuguese: [ˌkɔpakaˈbɐnɐ]) is a bairro (neighbourhood) located in the South Zone of the city of Rio de Janeiro, Brazil. It is most prominently known for its 4 km (2.5 miles) balneario beach, which is one of the most famous in the world.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The district was originally called Sacopenapãcode: tpw is deprecated (translated from the Tupi language, it means \"the way of the socóscode: tpw is deprecated \", the socóscode: tpw is deprecated being a kind of bird) until the mid-18th century. It was renamed after the construction of a chapel holding a replica of the Virgen de Copacabana, the patron saint of Bolivia.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Copacabana begins at Princesa Isabel Avenue and ends at Posto Seis (lifeguard watchtower Six). Beyond Copacabana, there are two small beaches: one, inside Fort Copacabana and the other, right after it: Diabo (\"Devil\") Beach. Arpoador beach, where surfers go after its perfect waves, comes next, followed by the famous borough of Ipanema. The area served as one of the four \"Olympic Zones\" during the 2016 Summer Olympics. According to Riotur, the Tourism Secretariat of Rio de Janeiro, there are 63 hotels and 10 hostels in Copacabana.",
"title": "Characteristics"
},
{
"paragraph_id": 3,
"text": "Copacabana beach, located at the Atlantic shore, stretches from Posto Dois (lifeguard watchtower Two) to Posto Seis (lifeguard watchtower Six). Leme is at Posto Um (lifeguard watchtower One). There are historic forts at both ends of Copacabana beach; Fort Copacabana, built in 1914, is at the south end by Posto Seis and Fort Duque de Caxias, built in 1779, at the north end. Many hotels, restaurants, bars, nightclubs and residential buildings are located in the area. On Sundays and holidays, one side of Avenida Atlântica is closed to cars, giving residents and tourists more space for activities along the beach.",
"title": "Copacabana Beach"
},
{
"paragraph_id": 4,
"text": "Copacabana Beach plays host to millions of revellers during the annual New Year's Eve celebrations, and in most years, has been the official venue of the FIFA Beach Soccer World Cup.",
"title": "Copacabana Beach"
},
{
"paragraph_id": 5,
"text": "The Copacabana promenade is a pavement landscape in large scale (4 kilometres long). It was rebuilt in 1970 and has used a black and white Portuguese pavement design since its origin in the 1930s: a geometric wave. The Copacabana promenade was designed by Roberto Burle Marx.",
"title": "Copacabana promenade"
},
{
"paragraph_id": 6,
"text": "Copacabana has the 12th highest Human Development Index in Rio; the 2000 census put the HDI of Copacabana at 0.902.",
"title": "Living standard"
},
{
"paragraph_id": 7,
"text": "According to the IBGE, 160,000 people live in Copacabana and 44,000 or 27.5% of them are 60 years old or older. Copacabana covers an area of 5.220 km which gives the borough a population density of 20,400 people per km. Residential buildings eleven to thirteen stories high built next to each other dominate the borough. Houses and two-story buildings are rare.",
"title": "Neighbourhood"
},
{
"paragraph_id": 8,
"text": "When Rio was the capital of Brazil, Copacabana was considered one of the best neighborhoods in the country.",
"title": "Neighbourhood"
},
{
"paragraph_id": 9,
"text": "More than 40 different bus routes serve Copacabana, as do three subway Metro stations: Cantagalo, Siqueira Campos and Cardeal Arcoverde.",
"title": "Transportation"
},
{
"paragraph_id": 10,
"text": "Three major arteries parallel to each other cut across the entire borough: Avenida Atlântica (Atlantic Avenue), which is a 6-lane, 4 km avenue by the beachside, Nossa Senhora de Copacabana Avenue and Barata Ribeiro/Raul Pompéia Street both of which are 4 lanes and 3.5 km in length. Barata Ribeiro Street changes its name to Raul Pompéia Street after the Sá Freire Alvim Tunnel. Twenty-four streets intersect all three major arteries, and seven other streets intersect some of the three.",
"title": "Transportation"
},
{
"paragraph_id": 11,
"text": "The fireworks display in Rio de Janeiro to celebrate New Year's Eve is one of the largest in the world, lasting 15 to 20 minutes. It is estimated that 2,000,000 people go to Copacabana Beach to see the spectacle. The festival also includes a concert that extends throughout the night. The celebration has become one of the biggest tourist attractions of Rio de Janeiro, attracting visitors from all over Brazil as well as from different parts of the world, and the city hotels generally stay fully booked. The celebration is broadcast live on major Brazilian networks including TV Globo.",
"title": "New Year's Eve in Copacabana"
},
{
"paragraph_id": 12,
"text": "New Year's Eve has been celebrated on Copacabana beach since the 1950s when cults of African origin such as Candomblé and Umbanda gathered in small groups dressed in white for ritual celebrations. The first fireworks display occurred in 1976, sponsored by a hotel on the waterfront and this has been repeated ever since. In the 1990s the city saw it as a great opportunity to promote the city and organized and expanded the event.",
"title": "New Year's Eve in Copacabana"
},
{
"paragraph_id": 13,
"text": "An assessment made during the New Year's Eve 1992 highlighted the risks associated with increasing crowd numbers on Copacabana beach after the fireworks display. Since the 1993-94 event concerts have been held on the beach to retain the public. The result was a success with egress spaced out over a period of 2 hours without the previous turmoil, although critics claimed that it denied the spirit of the New Year's tradition of a religious festival with fireworks by the sea. The following year Rod Stewart beat attendance records. Finally, the Tribute to Tom Jobim - with Gal Costa, Gilberto Gil, Caetano Veloso, Chico Buarque, and Paulinho da Viola - consolidated the shows at the Copacabana Réveillon.",
"title": "New Year's Eve in Copacabana"
},
{
"paragraph_id": 14,
"text": "There was a need to transform the fireworks display in a show of the same quality. The fireworks display was created by entrepreneurs Ricardo Amaral and Marius. From the previous 8–10 minutes the time was extended to 20 minutes and the quality and diversity of the fireworks was improved. A technical problem in fireworks 2000 required the use of ferries from New Year's Eve 2001-02. New Year's Eve has begun to compete with the Carnival, and since 1992 it has been a tourist attraction in its own right.",
"title": "New Year's Eve in Copacabana"
},
{
"paragraph_id": 15,
"text": "There was no celebration in 2020–21 due to the COVID-19 pandemic, but the fireworks show went on.",
"title": "New Year's Eve in Copacabana"
}
] | Copacabana is a bairro (neighbourhood) located in the South Zone of the city of Rio de Janeiro, Brazil. It is most prominently known for its 4 km (2.5 miles) balneario beach, which is one of the most famous in the world. | 2001-10-08T12:27:52Z | 2023-12-29T18:06:16Z | [
"Template:Wide image",
"Template:Sup",
"Template:Citation needed",
"Template:Reflist",
"Template:Cite news",
"Template:Commons category",
"Template:Authority control",
"Template:About",
"Template:IPAc-en",
"Template:IPA-pt",
"Template:RMS",
"Template:Dead link",
"Template:More citations needed",
"Template:Infobox settlement",
"Template:Respell",
"Template:Rio de Janeiro city neighbourhoods",
"Template:Lang",
"Template:Cite web",
"Template:2016 Summer Olympic venues"
] | https://en.wikipedia.org/wiki/Copacabana,_Rio_de_Janeiro |
6,725 | Cy Young Award | The Cy Young Award is given annually to the best pitchers in Major League Baseball (MLB), one each for the American League (AL) and National League (NL). The award was introduced in 1956 by Baseball Commissioner Ford Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. The award was originally given to the single best pitcher in the major leagues, but in 1967, after the retirement of Frick, the award was given to one pitcher in each league.
Each league's award is voted on by members of the Baseball Writers' Association of America, with one representative from each team. As of the 2010 season, each voter places a vote for first, second, third, fourth, and fifth place among the pitchers of each league. The formula used to calculate the final scores is a weighted sum of the votes. The pitcher with the highest score in each league wins the award. If two pitchers receive the same number of votes, the award is shared. From 1970 to 2009, writers voted for three pitchers, with the formula of five points for a first-place vote, three for a second-place vote and one for a third-place vote. Before 1970, writers only voted for the best pitcher and used a formula of one point per vote.
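The 1970–2009 tallying rule described above (five points for a first-place vote, three for second, one for third, with a shared award on a tie for the highest score) amounts to a simple weighted sum. The following is a minimal Python sketch of that computation; the function name and sample ballots are hypothetical, and the point weights for the five-place ballot used since 2010 are not stated here, so the weight tuple is left as a parameter rather than assumed.

```python
from collections import defaultdict

def tally_cy_young(ballots, weights=(5, 3, 1)):
    """Weighted tally per the 1970-2009 rule: 5/3/1 points for 1st/2nd/3rd.

    ballots -- iterable of ranked lists of pitcher names, best first
    weights -- points per ballot place; a five-element tuple would model
               the post-2010 five-place ballot (those weights are not
               given in the text above, so any choice is an assumption)
    Returns (pitcher, score) pairs sorted by score, highest first.
    """
    scores = defaultdict(int)
    for ballot in ballots:
        # Pair each listed pitcher with the points for that ballot place.
        for points, pitcher in zip(weights, ballot):
            scores[pitcher] += points
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical ballots from three writers:
ranking = tally_cy_young([
    ["Pitcher A", "Pitcher B", "Pitcher C"],
    ["Pitcher A", "Pitcher C", "Pitcher B"],
    ["Pitcher B", "Pitcher A", "Pitcher C"],
])
top_score = ranking[0][1]
# If two pitchers tie for the highest score, the award is shared.
winners = [name for name, score in ranking if score == top_score]
print(winners, top_score)  # ['Pitcher A'] 13
```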
The Cy Young Award was introduced in 1956 by Commissioner of Baseball Ford C. Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. Originally given to the single best pitcher in the major leagues, the award changed its format over time. From 1956 to 1966, the award was given to one pitcher in Major League Baseball. After Frick retired in 1965, William Eckert became the new Commissioner of Baseball. Due to fan requests, Eckert announced that from 1967 the Cy Young Award would be given out in both the American League and the National League. From 1956 to 1958, a pitcher was not allowed to win the award on more than one occasion; this rule was eliminated in 1959. After a tie in the 1969 voting for the Cy Young Award, the process was changed, in which each writer was to vote for three pitchers: the first-place vote received five points, the second-place vote received three points, and the third-place vote received one point.
The first recipient of the Cy Young Award was Don Newcombe of the Dodgers. The Dodgers are the franchise with the most Cy Young Awards. In 1957, Warren Spahn became the first left-handed pitcher to win the award. In 1963, Sandy Koufax became the first pitcher to win the award in a unanimous vote; two years later he became the first multiple winner. In 1978, Gaylord Perry (age 40) became the oldest pitcher to receive the award, a record that stood until broken in 2004 by Roger Clemens (age 42). The youngest recipient was Dwight Gooden (age 20 in 1985). In 2012, R. A. Dickey became the first knuckleball pitcher to win the award.
In 1974, Mike Marshall became the first relief pitcher to win the award. In 1992, Dennis Eckersley was the first modern closer (first player to be used almost exclusively in ninth-inning situations) to win the award, and since then only one other relief pitcher has won the award, Éric Gagné in 2003 (also a closer). A total of nine relief pitchers have won the Cy Young Award across both leagues.
Steve Carlton in 1982 became the first pitcher to win more than three Cy Young Awards, while Greg Maddux in 1994 became the first to win at least three in a row (and received a fourth straight the following year), a feat later repeated by Randy Johnson.
Twenty-two pitchers have won the award multiple times. Roger Clemens currently holds the record for the most awards won, with seven – his first and last wins separated by eighteen years. Greg Maddux (1992–1995) and Randy Johnson (1999–2002) share the record for the most consecutive awards won with four. Clemens, Johnson, Pedro Martínez, Gaylord Perry, Roy Halladay, Max Scherzer, and Blake Snell are the only pitchers to have won the award in both the American League and National League; Sandy Koufax is the only pitcher who won multiple awards during the period when only one award was presented for all of Major League Baseball. Roger Clemens was the youngest pitcher to win a second Cy Young Award, while Tim Lincecum is the youngest pitcher to do so in the National League and Clayton Kershaw is the youngest left-hander to do so. Clayton Kershaw is the youngest pitcher to win a third Cy Young Award. Clemens is also the only pitcher to win the Cy Young Award with four different teams; nobody else has done so with more than two different teams. Justin Verlander has the most seasons separating his first (2011) and second (2019) Cy Young Awards.
Only two teams have never had a pitcher win the Cy Young Award. The Brooklyn/Los Angeles Dodgers have won more than any other team with 12.
There have been 20 players who unanimously won the Cy Young Award, for a total of 27 wins.
Six of these unanimous wins were accompanied by a win of the Most Valuable Player award (marked with * below; ** denotes that the player's unanimous win was accompanied by a unanimous win of the MVP Award).
In the National League, 12 players have unanimously won the Cy Young Award, for a total of 15 wins.
In the American League, eight players have unanimously won the Cy Young Award, for a total of 12 wins.
Specific
General | [
{
"paragraph_id": 0,
"text": "The Cy Young Award is given annually to the best pitchers in Major League Baseball (MLB), one each for the American League (AL) and National League (NL). The award was introduced in 1956 by Baseball Commissioner Ford Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. The award was originally given to the single best pitcher in the major leagues, but in 1967, after the retirement of Frick, the award was given to one pitcher in each league.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Each league's award is voted on by members of the Baseball Writers' Association of America, with one representative from each team. As of the 2010 season, each voter places a vote for first, second, third, fourth, and fifth place among the pitchers of each league. The formula used to calculate the final scores is a weighted sum of the votes. The pitcher with the highest score in each league wins the award. If two pitchers receive the same number of votes, the award is shared. From 1970 to 2009, writers voted for three pitchers, with the formula of five points for a first-place vote, three for a second-place vote and one for a third-place vote. Before 1970, writers only voted for the best pitcher and used a formula of one point per vote.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The Cy Young Award was introduced in 1956 by Commissioner of Baseball Ford C. Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. Originally given to the single best pitcher in the major leagues, the award changed its format over time. From 1956 to 1966, the award was given to one pitcher in Major League Baseball. After Frick retired in 1967, William Eckert became the new Commissioner of Baseball. Due to fan requests, Eckert announced that the Cy Young Award would be given out both in the American League and the National League. From 1956 to 1958, a pitcher was not allowed to win the award on more than one occasion; this rule was eliminated in 1959. After a tie in the 1969 voting for the Cy Young Award, the process was changed, in which each writer was to vote for three pitchers: the first-place vote received five points, the second-place vote received three points, and the third-place vote received one point.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "The first recipient of the Cy Young Award was Don Newcombe of the Dodgers. The Dodgers are the franchise with the most Cy Young Awards. In 1957, Warren Spahn became the first left-handed pitcher to win the award. In 1963, Sandy Koufax became the first pitcher to win the award in a unanimous vote; two years later he became the first multiple winner. In 1978, Gaylord Perry (age 40) became the oldest pitcher to receive the award, a record that stood until broken in 2004 by Roger Clemens (age 42). The youngest recipient was Dwight Gooden (age 20 in 1985). In 2012, R. A. Dickey became the first knuckleball pitcher to win the award.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "In 1974, Mike Marshall became the first relief pitcher to win the award. In 1992, Dennis Eckersley was the first modern closer (first player to be used almost exclusively in ninth-inning situations) to win the award, and since then only one other relief pitcher has won the award, Éric Gagné in 2003 (also a closer). A total of nine relief pitchers have won the Cy Young Award across both leagues.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Steve Carlton in 1982 became the first pitcher to win more than three Cy Young Awards, while Greg Maddux in 1994 became the first to win at least three in a row (and received a fourth straight the following year), a feat later repeated by Randy Johnson.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Twenty-two (22) pitchers have won the award multiple times. Roger Clemens currently holds the record for the most awards won, with seven – his first and last wins separated by eighteen years. Greg Maddux (1992–1995) and Randy Johnson (1999–2002) share the record for the most consecutive awards won with four. Clemens, Johnson, Pedro Martínez, Gaylord Perry, Roy Halladay, Max Scherzer, and Blake Snell are the only pitchers to have won the award in both the American League and National League; Sandy Koufax is the only pitcher who won multiple awards during the period when only one award was presented for all of Major League Baseball. Roger Clemens was the youngest pitcher to win a second Cy Young Award, while Tim Lincecum is the youngest pitcher to do so in the National League and Clayton Kershaw is the youngest left-hander to do so. Clayton Kershaw is the youngest pitcher to win a third Cy Young Award. Clemens is also the only pitcher to win the Cy Young Award with four different teams; nobody else has done so with more than two different teams. Justin Verlander has the most seasons separating his first (2011) and second (2019) Cy Young Awards.",
"title": "Winners"
},
{
"paragraph_id": 7,
"text": "Only two teams have never had a pitcher win the Cy Young Award. The Brooklyn/Los Angeles Dodgers have won more than any other team with 12.",
"title": "Winners"
},
{
"paragraph_id": 8,
"text": "There have been 20 players who unanimously won the Cy Young Award, for a total of 27 wins.",
"title": "Winners"
},
{
"paragraph_id": 9,
"text": "Six of these unanimous wins were accompanied by a win of the Most Valuable Player award (marked with * below; ** denotes that the player's unanimous win was accompanied by a unanimous win of the MVP Award).",
"title": "Winners"
},
{
"paragraph_id": 10,
"text": "In the National League, 12 players have unanimously won the Cy Young Award, for a total of 15 wins.",
"title": "Winners"
},
{
"paragraph_id": 11,
"text": "In the American League, eight players have unanimously won the Cy Young Award, for a total of 12 wins.",
"title": "Winners"
},
{
"paragraph_id": 12,
"text": "Specific",
"title": "References"
},
{
"paragraph_id": 13,
"text": "General",
"title": "References"
}
] | The Cy Young Award is given annually to the best pitchers in Major League Baseball (MLB), one each for the American League (AL) and National League (NL). The award was introduced in 1956 by Baseball Commissioner Ford Frick in honor of Hall of Fame pitcher Cy Young, who died in 1955. The award was originally given to the single best pitcher in the major leagues, but in 1967, after the retirement of Frick, the award was given to one pitcher in each league. Each league's award is voted on by members of the Baseball Writers' Association of America, with one representative from each team. As of the 2010 season, each voter places a vote for first, second, third, fourth, and fifth place among the pitchers of each league. The formula used to calculate the final scores is a weighted sum of the votes. The pitcher with the highest score in each league wins the award. If two pitchers receive the same number of votes, the award is shared. From 1970 to 2009, writers voted for three pitchers, with the formula of five points for a first-place vote, three for a second-place vote and one for a third-place vote. Before 1970, writers only voted for the best pitcher and used a formula of one point per vote. | 2001-10-09T15:01:08Z | 2023-11-18T20:59:36Z | [
"Template:Short description",
"Template:Featured list",
"Template:Ref label",
"Template:Cite web",
"Template:Cite news",
"Template:Refbegin",
"Template:AL Cy Young",
"Template:MLB awards",
"Template:Infobox sports award",
"Template:Mlby",
"Template:Portal",
"Template:Refend",
"Template:NL Cy Young",
"Template:MLB Combined Cy Young",
"Template:Use mdy dates",
"Template:Dagger",
"Template:Sortname",
"Template:Div col",
"Template:Div col end",
"Template:Sup",
"Template:Note label",
"Template:Reflist",
"Template:Cite book",
"Template:Cbignore"
] | https://en.wikipedia.org/wiki/Cy_Young_Award |
6,728 | Antisemitism in Christianity | Some Christian Churches, Christian groups, and ordinary Christians express religious antisemitism toward the Jewish people and the associated religion of Judaism.
Antisemitic Christian rhetoric and the resulting antipathy toward Jews both date back to the early years of Christianity and are derived from pagan anti-Jewish attitudes that were reinforced by the belief that Jews were responsible for the murder of Jesus of Nazareth. Christians imposed ever-increasing anti-Jewish measures over the ensuing centuries, including acts of ostracism, humiliation, expropriation, violence, and murder—measures which culminated in the Holocaust.
Christian antisemitism has been attributed to numerous factors including theological differences between these two related Abrahamic religions; the competition between Church and synagogue; the Christian missionary impulse; a misunderstanding of Jewish culture, beliefs, and practice; and the perception that Judaism was hostile toward Christianity. For two millennia, these attitudes were reinforced in Christian preaching, art, and popular teachings—all of which express contempt for Jews—as well as statutes designed to humiliate and stigmatise Jews.
Modern antisemitism has primarily been described as hatred against Jews as a race and its most recent expression is rooted in 18th-century racial theories. Anti-Judaism is rooted in hostility toward Judaism the religion; in Western Christianity, anti-Judaism effectively merged with antisemitism during the 12th century. Scholars have debated how Christian antisemitism played a role in the Nazi Third Reich, World War II, and the Holocaust. The Holocaust forced many Christians to reflect on the role(s) Christian theology and practice played and still play in anti-Judaism and antisemitism.
The legal status of Christianity and Judaism differed within the Roman Empire: because the practice of Judaism was restricted to the Jewish people and Jewish proselytes, its followers were generally exempt from the obligations that were imposed on followers of other religions by the Roman imperial cult. Since the reign of Julius Caesar, Judaism had enjoyed the status of a "licit religion", but occasional persecutions still occurred, such as Tiberius' conscription and expulsion of Jews in 19 AD, followed by Claudius' expulsion of Jews from Rome. Christianity, however, was not restricted to one people, and because Jewish Christians were excluded from the synagogue (see Council of Jamnia), they also lost the protected status that was granted to Judaism, even though that protection still had its limits (see Titus Flavius Clemens (consul), Rabbi Akiva, and Ten Martyrs).
From the reign of Nero onwards (Nero is said by Tacitus to have blamed the Great Fire of Rome on Christians), the practice of Christianity was criminalized and Christians were frequently persecuted, but the persecution differed from region to region. Comparably, Judaism suffered setbacks due to the Jewish-Roman wars, and these setbacks are remembered in the legacy of the Ten Martyrs. Robin Lane Fox traces the origin of much of the later hostility to this early period of persecution, when the Roman authorities commonly tested the faith of suspected Christians by forcing them to pay homage to the deified emperor. Jews were exempt from this requirement as long as they paid the Fiscus Judaicus, and Christians (many or mostly of Jewish origin) would say that they were Jewish but refused to pay the tax. This had to be confirmed by the local Jewish authorities, who were likely to refuse to accept the Christians as fellow Jews, often leading to their execution. The Birkat haMinim was often brought forward as support for the charge that the Jews were responsible for the Persecution of Christians in the Roman Empire. In the 3rd century, systematic persecution of Christians began and lasted until Constantine's conversion to Christianity. In 380 Theodosius I made Christianity the state church of the Roman Empire. While pagan cults and Manichaeism were suppressed, Judaism retained its legal status as a licit religion, though anti-Jewish violence still occurred. In the 5th century, some legal measures worsened the status of the Jews in the Roman Empire.
Another point of contention for Christians concerning Judaism, according to the modern KJV of the Protestant Bible, is attributed more to religious bias than to an issue of race or of being a "Semite". Paul (a Benjamite Hebrew) clarifies this point in the letter to the Galatians, where he makes plain his declaration: "There is neither Jew nor Greek, there is neither bond nor free, there is neither male nor female: for ye are all one in Christ Jesus. And if ye be Christ's, then are ye Abraham's seed, and heirs according to the promise." Further, Paul states: "Brethren, I speak after the manner of men; Though it be but a man's covenant, yet if it be confirmed, no man disannulleth, or addeth thereto. Now to Abraham and his seed were the promises made. He saith not, And to seeds, as of many; but as of one, And to thy seed, which is Christ."
In Judaism, Jesus was not recognized as the Messiah but was instead regarded as a failed Jewish Messiah claimant and a false prophet; Christians interpreted this as the Jews' rejection of Jesus. However, since Jews traditionally believe that the messiah has not yet come and the Messianic Age is not yet present, the total rejection of Jesus as either the messiah or a deity has never been a central issue in Judaism.
Many New Testament passages criticise the Pharisees and it has been argued that these passages have shaped the way that Christians viewed Jews. Like most Bible passages, however, they can be and have been interpreted in a variety of ways.
Mainstream Talmudic Rabbinical Judaism today directly descends from the Pharisees, whom Jesus often criticized. During Jesus' life and at the time of his execution, the Pharisees were only one of several Jewish groups, such as the Sadducees, Zealots, and Essenes, which mostly died out not long after the period; indeed, Jewish scholars such as Harvey Falk and Hyam Maccoby have suggested that Jesus was himself a Pharisee. In Matthew 23, for example, Jesus says "The Pharisees sit in Moses' seat, therefore do what they say...". Arguments by Jesus and his disciples against certain groups of Pharisees and what he saw as their hypocrisy were most likely examples of disputes among Jews and internal to Judaism that were common at the time; see, for example, Hillel and Shammai.
Professor Lillian C. Freudmann, author of Antisemitism in the New Testament (University Press of America, 1994), has published a detailed study of the description of Jews in the New Testament and the historical effects that such passages have had in the Christian community throughout history. Similar studies of such verses have been made by both Christian and Jewish scholars, including Professors Clark Williamson (Christian Theological Seminary), Hyam Maccoby (The Leo Baeck Institute), Norman A. Beck (Texas Lutheran College), and Michael Berenbaum (Georgetown University). Most rabbis feel that these verses are antisemitic, and many Christian scholars, in America and Europe, have reached the same conclusion. Another example is John Dominic Crossan's 1995 book, titled Who Killed Jesus? Exposing the Roots of Anti-Semitism in the Gospel Story of the Death of Jesus.
Some biblical scholars have also been accused of holding antisemitic beliefs. Bruce J. Malina, a founding member of The Context Group, has come under criticism for going as far as to deny the Semitic ancestry of modern Israelis. He then ties this back to his work on first-century cultural anthropology.
Jewish deicide is the belief that Jews will always be collectively responsible for the killing of Jesus, also known as the blood curse. A justification of this charge is derived from Matthew (27:24–25), which alleges that a crowd of Jews told Pilate that they and their children would be responsible for Jesus' death. Most members of the Church of Jesus Christ of Latter-day Saints accept the Jewish deicide, while the Catholic Church and several other Christian denominations have repudiated it.
After Paul's death, Christianity emerged as a separate religion, and Pauline Christianity became the dominant form of Christianity, especially after Paul, James, and the other apostles agreed on a compromise set of requirements. Some Christians continued to adhere to aspects of Jewish law, but they were few in number and often considered heretics by the Church. One example is the Ebionites, who seem to have denied the virgin birth of Jesus, the physical Resurrection of Jesus, and most of the books that were later canonized as the New Testament. Another example is the Ethiopian Orthodox, who still continue Old Testament practices such as the Sabbath. As late as the 4th century, Church Father John Chrysostom complained that some Christians were still attending Jewish synagogues. The Church Fathers identified Jews and Judaism with heresy and declared the people of Israel to be extra Deum (lat. "outside of God").
Peter of Antioch referred to Christians who refused to venerate religious images as having "Jewish minds".
In the early second century AD, the heretic Marcion of Sinope (c. 85 – c. 160 AD) declared that the Jewish God was a different God, inferior to the Christian one, and rejected the Jewish scriptures as the product of a lesser deity. Marcion's teachings, which were extremely popular, rejected Judaism not only as an incomplete revelation but as a false one as well; at the same time, they allowed less blame to be placed on the Jews personally for having not recognized Jesus, since, in Marcion's worldview, Jesus was not sent by the lesser Jewish God but by the supreme Christian God, whom the Jews had no reason to recognize.
In combating Marcion, orthodox apologists conceded that Judaism was an incomplete and inferior religion to Christianity, while also defending the Jewish scriptures as canonical.
The Church Father Tertullian (c. 155 – c. 240 AD) had a particularly intense personal dislike towards the Jews and argued that the Gentiles had been chosen by God to replace the Jews, because they were worthier and more honorable. Origen of Alexandria (c. 184 – c. 253) was more knowledgeable about Judaism than any of the other Church Fathers, having studied Hebrew, met Rabbi Hillel the Younger, consulted and debated with Jewish scholars, and been influenced by the allegorical interpretations of Philo of Alexandria. Origen defended the canonicity of the Old Testament and defended Jews of the past as having been chosen by God for their merits. Nonetheless, he condemned contemporary Jews for not understanding their own Law, insisted that Christians were the "true Israel", and blamed the Jews for the death of Christ. He did, however, maintain that Jews would eventually attain salvation in the final apocatastasis. Hippolytus of Rome (c. 170 – c. 235 AD) wrote that the Jews had "been darkened in the eyes of your soul with a darkness utter and everlasting."
Bishops of the patristic era such as Augustine of Hippo argued that the Jews should be left alive and suffering as a perpetual reminder of their murder of Christ. Like his anti-Jewish teacher, Ambrose of Milan, he defined Jews as a special subset of those damned to hell. As "Witness People", he sanctified collective punishment for the Jewish deicide and enslavement of Jews to Catholics: "Not by bodily death, shall the ungodly race of carnal Jews perish ... 'Scatter them abroad, take away their strength. And bring them down O Lord'". Augustine claimed to "love" the Jews but as a means to convert them to Christianity. Sometimes he identified all Jews with the evil Judas and developed the doctrine (together with Cyprian) that there was "no salvation outside the Church".
John Chrysostom and other church fathers went further in their condemnation; the Catholic editor Paul Harkins wrote that St. John Chrysostom's anti-Jewish theology "is no longer tenable (...) For these objectively unchristian acts he cannot be excused, even if he is the product of his times." John Chrysostom held, as most Church Fathers did, that the sins of all Jews were communal and endless; to him, his Jewish neighbours were the collective representation of all alleged crimes of all preexisting Jews. All Church Fathers applied the passages of the New Testament concerning the alleged advocacy of the crucifixion of Christ to all Jews of their day; the Jews were the ultimate evil. However, John Chrysostom went so far as to say that because Jews rejected the Christian God in human flesh, Christ, they therefore deserved to be killed: "grew fit for slaughter." In citing the New Testament, he claimed that Jesus was speaking about Jews when he said, "as for these enemies of mine who did not want me to reign over them, bring them here and slay them before me."
St. Jerome identified Jews with Judas Iscariot and the immoral use of money ("Judas is cursed, that in Judas the Jews may be accursed... their prayers turn into sins"). Jerome's homiletical assaults, which may have served as the basis for the anti-Jewish Good Friday liturgy, identify Jews with evil, claiming that "the ceremonies of the Jews are harmful and deadly to Christians" and that whoever keeps them is doomed to the devil: "My enemies are the Jews; they have conspired in hatred against Me, crucified Me, heaped evils of all kinds upon Me, blasphemed Me."
Ephraim the Syrian wrote polemics against Jews in the 4th century, including the repeated accusation that Satan dwells among them as a partner. The writings were directed at Christians who were being proselytized by Jews. Ephraim feared that they were slipping back into Judaism; thus, he portrayed the Jews as enemies of Christianity, like Satan, to emphasize the contrast between the two religions, namely, that Christianity was Godly and true and Judaism was Satanic and false. Like John Chrysostom, his objective was to dissuade Christians from reverting to Judaism by emphasizing what he saw as the wickedness of the Jews and their religion.
Bernard of Clairvaux said "For us the Jews are Scripture's living words, because they remind us of what Our Lord suffered. They are not to be persecuted, killed, or even put to flight." According to Anna Sapir Abulafia, most scholars agree that Jews and Christians in Latin Christendom lived in relative peace with one another until the thirteenth century.
Jews were subjected to a wide range of legal disabilities and restrictions in Medieval Europe. Jews were excluded from many trades, the occupations varying with place and time and determined by the influence of various non-Jewish competing interests. Often Jews were barred from all occupations but money-lending and peddling, with even these at times forbidden. Jews' association with money lending would carry on throughout history in the stereotype of Jews as greedy and as perpetuating capitalism.
In the later medieval period, the number of Jews who were permitted to reside in certain places was limited; they were concentrated in ghettos, and they were also not allowed to own land; they were forced to pay discriminatory taxes whenever they entered cities or districts other than their own. The Oath More Judaico, the form of oath required from Jewish witnesses, developed bizarre or humiliating forms in some places, e.g. in the Swabian law of the 13th century, the Jew would be required to stand on the hide of a sow or a bloody lamb.
The Fourth Lateran Council, held in 1215, was the first council to proclaim that Jews were required to wear something which distinguished them as Jews (the same requirement was also imposed on Muslims). On many occasions, Jews were accused of blood libels, the supposed drinking of the blood of Christian children in mockery of the Christian Eucharist.
Sicut Judaeis (the "Constitution for the Jews") was the official position of the papacy regarding Jews throughout the Middle Ages and later. The first bull was issued in about 1120 by Calixtus II, intended to protect Jews who suffered during the First Crusade, and was reaffirmed by many popes, even until the 15th century although they were not always strictly upheld.
The bull forbade, among other things, Christians from coercing Jews to convert, harming them, taking their property, disturbing the celebration of their festivals, or interfering with their cemeteries, on pain of excommunication:
We decree that no Christian shall use violence to force them to be baptized, so long as they are unwilling and refuse. ... Without the judgment of the political authority of the land, no Christian shall presume to wound them or kill them or rob them of their money or change the good customs that they have thus far enjoyed in the place where they live.
Antisemitism in popular European Christian culture escalated beginning in the 13th century. Blood libels and host desecration drew popular attention and led to many cases of persecution against Jews. Many believed Jews poisoned wells to cause plagues. In the case of blood libel, it was widely believed that the Jews would kill a child before Easter because they needed Christian blood to bake matzo. Throughout history, if a Christian child was murdered, accusations of blood libel would arise no matter how small the Jewish population. The Church often added to the fire by portraying the dead child as a martyr who had been tortured, and the child was believed to have had powers like those of Jesus. Sometimes the children were even made into saints. Antisemitic imagery such as Judensau and Ecclesia et Synagoga recurred in Christian art and architecture. Anti-Jewish Easter holiday customs such as the Burning of Judas continue to the present time.
In Iceland, one of the hymns repeated in the days leading up to Easter includes the lines,
During the Middle Ages in Europe persecutions and formal expulsions of Jews were liable to occur at intervals, although this was also the case for other minority communities, regardless of whether they were religious or ethnic. There were particular outbursts of riotous persecution during the Rhineland massacres of 1096 in Germany accompanying the lead-up to the First Crusade, many involving the crusaders as they travelled to the East. There were many local expulsions from cities by local rulers and city councils. In Germany the Holy Roman Emperor generally tried to restrain persecution, if only for economic reasons, but he was often unable to exert much influence. In the Edict of Expulsion, King Edward I expelled all the Jews from England in 1290 (only after ransoming some 3,000 among the most wealthy of them), on the accusation of usury and undermining loyalty to the dynasty. In 1306 there was a wave of persecution in France, and there were widespread Black Death Jewish persecutions as the Jews were blamed by many Christians for the plague, or spreading it. As late as 1519, the Imperial city of Regensburg took advantage of the recent death of Emperor Maximilian I to expel its 500 Jews.
"Officially, the medieval Catholic church never advocated the expulsion of all the Jews from Christendom, or repudiated Augustine's doctrine of Jewish witness... Still, late medieval Christendom frequently ignored its mandates..."
The largest expulsion of Jews followed the Reconquista or the reunification of Spain, and it preceded the expulsion of the Muslims who would not convert, in spite of the protection of their religious rights promised by the Treaty of Granada (1491). On 31 March 1492 Ferdinand II of Aragon and Isabella I of Castile, the rulers of Spain who financed Christopher Columbus' voyage to the New World just a few months later in 1492, declared that all Jews in their territories should either convert to Christianity or leave the country. While some converted, many others left for Portugal, France, Italy (including the Papal States), Netherlands, Poland, the Ottoman Empire, and North Africa. Many of those who had fled to Portugal were later expelled by King Manuel in 1497 or left to avoid forced conversion and persecution.
On 14 July 1555, Pope Paul IV issued papal bull Cum nimis absurdum which revoked all the rights of the Jewish community and placed religious and economic restrictions on Jews in the Papal States, renewed anti-Jewish legislation and subjected Jews to various degradations and restrictions on their personal freedom.
The bull established the Roman Ghetto and required Jews of Rome, which had existed as a community since before Christian times and which numbered about 2,000 at the time, to live in it. The Ghetto was a walled quarter with three gates that were locked at night. Jews were also restricted to one synagogue per city.
Paul IV's successor, Pope Pius IV, enforced the creation of other ghettos in most Italian towns, and his successor, Pope Pius V, recommended them to other bordering states.
Martin Luther at first made overtures towards the Jews, believing that the "evils" of Catholicism had prevented their conversion to Christianity. When his call to convert to his version of Christianity was unsuccessful, he became hostile to them.
In his book On the Jews and Their Lies, Luther excoriates them as "venomous beasts, vipers, disgusting scum, cankers, devils incarnate." He provided detailed recommendations for a pogrom against them, calling for their permanent oppression and expulsion, writing "Their private houses must be destroyed and devastated, they could be lodged in stables. Let the magistrates burn their synagogues and let whatever escapes be covered with sand and mud. Let them be forced to work, and if this avails nothing, we will be compelled to expel them like dogs in order not to expose ourselves to incurring divine wrath and eternal damnation from the Jews and their lies." At one point he wrote: "...we are at fault in not slaying them...", a passage that "may be termed the first work of modern antisemitism, and a giant step forward on the road to the Holocaust."
Luther's harsh comments about the Jews are seen by many as a continuation of medieval Christian antisemitism. In his final sermon shortly before his death, however, Luther preached: "We want to treat them with Christian love and to pray for them, so that they might become converted and would receive the Lord."
In accordance with the anti-Jewish precepts of the Russian Orthodox Church, Russia's discriminatory policies towards Jews intensified when the partition of Poland in the 18th century resulted, for the first time in Russian history, in the possession of land with a large Jewish population. This land was designated as the Pale of Settlement from which Jews were forbidden to migrate into the interior of Russia. In 1772 Catherine II, the empress of Russia, forced the Jews living in the Pale of Settlement to stay in their shtetls and forbade them from returning to the towns that they occupied before the partition of Poland.
Throughout the 19th century and into the 20th, the Roman Catholic Church still incorporated strong antisemitic elements, despite increasing attempts to separate anti-Judaism (opposition to the Jewish religion on religious grounds) and racial antisemitism. Brown University historian David Kertzer, working from the Vatican archive, has argued in his book The Popes Against the Jews that in the 19th and early 20th centuries the Roman Catholic Church adhered to a distinction between "good antisemitism" and "bad antisemitism". The "bad" kind promoted hatred of Jews because of their descent. This was considered un-Christian because the Christian message was intended for all of humanity regardless of ethnicity; anyone could become a Christian. The "good" kind criticized alleged Jewish conspiracies to control newspapers, banks, and other institutions, to care only about accumulation of wealth, etc. Many Catholic bishops wrote articles criticizing Jews on such grounds, and, when they were accused of promoting hatred of Jews, they would remind people that they condemned the "bad" kind of antisemitism. Kertzer's work is not without critics. Scholar of Jewish-Christian relations Rabbi David G. Dalin, for example, criticized Kertzer in the Weekly Standard for using evidence selectively.
The counter-revolutionary Catholic royalist Louis de Bonald stands out among the earliest figures to explicitly call for the reversal of Jewish emancipation in the wake of the French Revolution. Bonald's attacks on the Jews are likely to have influenced Napoleon's decision to limit the civil rights of Alsatian Jews. Bonald's article Sur les juifs (1806) was one of the most venomous screeds of its era and furnished a paradigm which combined anti-liberalism, a defense of a rural society, traditional Christian antisemitism, and the identification of Jews with bankers and finance capital, which would in turn influence many subsequent right-wing reactionaries such as Roger Gougenot des Mousseaux, Charles Maurras, and Édouard Drumont, nationalists such as Maurice Barrès and Paolo Orano, and antisemitic socialists such as Alphonse Toussenel. Bonald furthermore declared that the Jews were an "alien" people, a "state within a state", and should be forced to wear a distinctive mark to more easily identify and discriminate against them.
In the 1840s, the popular counter-revolutionary Catholic journalist Louis Veuillot propagated Bonald's arguments against the Jewish "financial aristocracy" along with vicious attacks against the Talmud and the Jews as a "deicidal people" driven by hatred to "enslave" Christians. Gougenot des Mousseaux's Le Juif, le judaïsme et la judaïsation des peuples chrétiens (1869) has been called a "Bible of modern antisemitism" and was translated into German by Nazi ideologue Alfred Rosenberg. Between 1882 and 1886 alone, French priests published twenty antisemitic books blaming France's ills on the Jews and urging the government to consign them back to the ghettos, expel them, or hang them from the gallows.
In Italy, the Jesuit priest Antonio Bresciani's highly popular 1850 novel L'Ebreo di Verona (The Jew of Verona) shaped religious antisemitism for decades, as did his work for La Civiltà Cattolica, which he helped launch.
Pope Pius VII (1800–1823) had the walls of the Jewish ghetto in Rome rebuilt after the Jews were emancipated by Napoleon, and Jews were restricted to the ghetto through the end of the Papal States in 1870. Official Catholic organizations, such as the Jesuits, banned candidates "who are descended from the Jewish race unless it is clear that their father, grandfather, and great-grandfather have belonged to the Catholic Church" until 1946.
In Russia, under the Tsarist regime, antisemitism intensified in the early years of the 20th century and was given official favour when the secret police forged the notorious Protocols of the Elders of Zion, a document purported to be a transcription of a plan by Jewish elders to achieve global domination. Violence against the Jews in the Kishinev pogrom in 1903 was continued after the 1905 revolution by the activities of the Black Hundreds. The Beilis Trial of 1913 showed that it was possible to revive the blood libel accusation in Russia.
Catholic writers such as Ernest Jouin, who published the Protocols in French, seamlessly blended racial and religious antisemitism, as in his statement that "from the triple viewpoint of race, of nationality, and of religion, the Jew has become the enemy of humanity." Pope Pius XI praised Jouin for "combating our mortal [Jewish] enemy" and appointed him to high papal office as a protonotary apostolic.
In 1916, in the midst of the First World War, American Jews petitioned Pope Benedict XV on behalf of the Polish Jews.
During a meeting with Roman Catholic Bishop Wilhelm Berning of Osnabrück on April 26, 1933, Hitler declared:
I have been attacked because of my handling of the Jewish question. The Catholic Church considered the Jews pestilent for fifteen hundred years, put them in ghettos, etc., because it recognized the Jews for what they were. In the epoch of liberalism the danger was no longer recognized. I am moving back toward the time in which a fifteen-hundred-year-long tradition was implemented. I do not set race over religion, but I recognize the representatives of this race as pestilent for the state and for the Church, and perhaps I am thereby doing Christianity a great service by pushing them out of schools and public functions.
The transcript of the discussion does not contain any response by Bishop Berning. Martin Rhonheimer does not consider this unusual because, in his opinion, there was nothing particularly objectionable for a Catholic bishop in 1933 "in this historically correct reminder".
The Nazis used Martin Luther's book, On the Jews and Their Lies (1543), to justify their claim that their ideology was morally righteous. Luther even went so far as to advocate the murder of Jews who refused to convert to Christianity by writing that "we are at fault in not slaying them."
Archbishop Robert Runcie asserted that: "Without centuries of Christian antisemitism, Hitler's passionate hatred would never have been so fervently echoed... because for centuries Christians have held Jews collectively responsible for the death of Jesus. On Good Friday in times past, Jews have cowered behind locked doors with fear of a Christian mob seeking 'revenge' for deicide. Without the poisoning of Christian minds through the centuries, the Holocaust is unthinkable." The dissident Catholic priest Hans Küng has written that "Nazi anti-Judaism was the work of godless, anti-Christian criminals. But it would not have been possible without the almost two thousand years' pre-history of 'Christian' anti-Judaism..." The consensus among historians is that Nazism as a whole was either unrelated or actively opposed to Christianity, and Hitler was strongly critical of it, although Germany remained mostly Christian during the Nazi era.
The document Dabru Emet was issued by over 220 rabbis and intellectuals from all branches of Judaism in 2000 as a statement about Jewish-Christian relations. This document states,
Nazism was not a Christian phenomenon. Without the long history of Christian anti-Judaism and Christian violence against Jews, Nazi ideology could not have taken hold nor could it have been carried out. Too many Christians participated in, or were sympathetic to, Nazi atrocities against Jews. Other Christians did not protest sufficiently against these atrocities. But Nazism itself was not an inevitable outcome of Christianity.
According to American historian Lucy Dawidowicz, antisemitism has a long history within Christianity. The line of "antisemitic descent" from Luther, the author of On the Jews and Their Lies, to Hitler is "easy to draw." In her The War Against the Jews, 1933–1945, she contends that Luther and Hitler were obsessed by the "demonologized universe" inhabited by Jews. Dawidowicz writes that the similarities between Luther's anti-Jewish writings and modern antisemitism are no coincidence, because they derived from a common history of Judenhass, which can be traced to Haman's advice to Ahasuerus. Although modern German antisemitism also has its roots in German nationalism and the liberal revolution of 1848, Christian antisemitism, she writes, is a foundation that was laid by the Roman Catholic Church and "upon which Luther built."
The Confessing Church was, in 1934, the first Christian opposition group. The Catholic Church officially condemned the Nazi theory of racism in Germany in 1937 with the encyclical "Mit brennender Sorge", signed by Pope Pius XI, and Cardinal Michael von Faulhaber led the Catholic opposition, preaching against racism.
Many individual Christian clergy and laypeople of all denominations had to pay for their opposition with their lives.
By the 1940s, few Christians were willing to publicly oppose Nazi policy, but many Christians secretly helped save the lives of Jews. There are many sections of Israel's Holocaust Remembrance Museum, Yad Vashem, which are dedicated to honoring these "Righteous Among the Nations".
Before he became Pope, Cardinal Pacelli addressed the International Eucharistic Congress in Budapest on 25–30 May 1938 during which he made reference to the Jews "whose lips curse [Christ] and whose hearts reject him even today"; at this time antisemitic laws were in the process of being formulated in Hungary.
The 1937 encyclical Mit brennender Sorge, issued by Pope Pius XI but drafted by the future Pope Pius XII and read from the pulpits of all German Catholic churches, condemned Nazi ideology and has been characterized by scholars as the "first great official public document to dare to confront and criticize Nazism" and "one of the greatest such condemnations ever issued by the Vatican."
In the summer of 1942, Pius explained to his college of Cardinals the reasons for the great gulf that existed between Jews and Christians at the theological level: "Jerusalem has responded to His call and to His grace with the same rigid blindness and stubborn ingratitude that has led it along the path of guilt to the murder of God." Historian Guido Knopp describes these comments of Pius as being "incomprehensible" at a time when "Jerusalem was being murdered by the million". This traditional adversarial relationship with Judaism would be reversed in Nostra aetate, which was issued during the Second Vatican Council.
Prominent members of the Jewish community have contradicted the criticisms of Pius and spoken highly of his efforts to protect Jews. The Israeli historian Pinchas Lapide interviewed war survivors and concluded that Pius XII "was instrumental in saving at least 700,000, but probably as many as 860,000 Jews from certain death at Nazi hands". Some historians dispute this estimate.
The Christian Identity movement, the Ku Klux Klan and other White supremacist groups have expressed antisemitic views. They claim that their antisemitism is based on purported Jewish control of the media, control of international banks, involvement in radical left-wing politics, and the Jews' promotion of multiculturalism, anti-Christian groups, liberalism and perverse organizations. They rebut charges of racism by claiming that Jews who share their views maintain membership in their organizations. A racial belief which is common among these groups, but not universal among them, is an alternative history doctrine concerning the descendants of the Lost Tribes of Israel. In some of its forms, this doctrine absolutely denies the view that modern Jews have any ethnic connection to the Israel of the Bible. Instead, according to extreme forms of this doctrine, the true Israelites and the true humans are the members of the Adamic (white) race. These groups are often rejected, and not even considered Christian, by mainstream Christian denominations and the vast majority of Christians around the world.
Antisemitism remains a substantial problem in Europe and, to a greater or lesser degree, it also exists in many other regions, including Eastern Europe and the former Soviet Union, and tensions between some Muslim immigrants and Jews have increased across Europe. The US State Department reports that antisemitism has increased dramatically in Europe and Eurasia since 2000.
While it has been on the decline since the 1940s, a measurable amount of antisemitism still exists in the United States, although acts of violence are rare. For example, the influential Evangelical preacher Billy Graham and the then-president Richard Nixon were caught on tape in the early 1970s while they were discussing matters like how to address the Jews' supposed control of the American media. This belief in Jewish conspiracies and domination of the media was similar to the beliefs of Graham's former mentors: William Bell Riley chose Graham to succeed him as the second president of Northwestern Bible and Missionary Training School, and evangelist Mordecai Ham led the meetings where Graham first believed in Christ. Both held strongly antisemitic views. The 2001 survey by the Anti-Defamation League reported 1,432 acts of antisemitism in the United States that year. The figure included 877 acts of harassment, including verbal intimidation, threats and physical assaults. A minority of American churches engage in anti-Israel activism, including support for the controversial BDS (Boycott, Divestment and Sanctions) movement. While not directly indicative of antisemitism, this activism often conflates the Israeli government's treatment of Palestinians with that of Jesus, thereby promoting the antisemitic doctrine of Jewish guilt. Many Christian Zionists are also accused of antisemitism, such as John Hagee, who argued that the Jews brought the Holocaust upon themselves by angering God.
Relations between Jews and Christians have dramatically improved since the 20th century. A global poll conducted in 2014 by the Anti-Defamation League, a Jewish group devoted to fighting antisemitism and other forms of racism, collected data from 102 countries on their populations' attitudes towards Jews and revealed that only 24% of the world's Christians held views considered antisemitic according to the ADL's index, compared to 49% of the world's Muslims.
Many Christians do not consider anti-Judaism antisemitism. They regard anti-Judaism as a disagreement with the tenets of Judaism by religiously sincere people, while they regard antisemitism as an emotional bias or hatred which does not specifically target the religion of Judaism. Under this approach, anti-Judaism is not regarded as antisemitism because it does not involve actual hostility towards the Jewish people; instead, it only rejects the religious beliefs of Judaism.
Others believe that anti-Judaism is rejection of Judaism as a religion or opposition to Judaism's beliefs and practices essentially because of their source in Judaism or because a belief or practice is associated with the Jewish people (but see supersessionism).
The position that "Christian theological anti-Judaism is a phenomenon which is distinct from modern antisemitism, which is rooted in economic and racial thought, so that Christian teachings should not be held responsible for antisemitism" has been articulated, among others, by Pope John Paul II in 'We Remember: A Reflection on the Shoah,' and the Jewish declaration on Christianity, Dabru Emet. Several scholars, including Susannah Heschel, Gavin I. Langmuir, and Uriel Tal, have challenged this position, arguing that anti-Judaism directly led to modern antisemitism.
Although some Christians did consider anti-Judaism to be contrary to Christian teaching in the past, this view was not widely expressed by Christian leaders and laypeople. In many cases, practical tolerance towards Jews and their religion prevailed. Some Christian groups condemned verbal anti-Judaism, particularly in their early years.
Some Jewish organizations have denounced evangelistic and missionary activities which specifically target Jews by labeling them antisemitic.
The Southern Baptist Convention (SBC), the largest Protestant Christian denomination in the U.S., has explicitly rejected suggestions that it should back away from seeking to convert Jews, a position which critics have called antisemitic, but a position which Baptists believe is consistent with their view that salvation is solely found through faith in Christ. In 1996 the SBC approved a resolution calling for efforts to seek the conversion of Jews "as well as the salvation of 'every kindred and tongue and people and nation.'"
Most Evangelicals agree with the SBC's position, and some of them also support efforts which specifically seek the Jews' conversion. Additionally, these Evangelical groups are among the most pro-Israel groups. (For more information, see Christian Zionism.) One controversial group which has received a considerable amount of support from some Evangelical churches is Jews for Jesus, which claims that Jews can "complete" their Jewish faith by accepting Jesus as the Messiah.
The Presbyterian Church (USA), the United Methodist Church, and the United Church of Canada have ended their efforts to convert Jews. While Anglicans do not, as a rule, seek converts from other Christian denominations, the General Synod has affirmed that "the good news of salvation in Jesus Christ is for all and must be shared with all including people from other faiths or of no faith and that to do anything else would be to institutionalize discrimination".
The Roman Catholic Church formerly operated religious congregations which specifically aimed to convert Jews. Some of these congregations were actually founded by Jewish converts, like the Congregation of Our Lady of Sion, whose members were nuns and ordained priests. Many Catholic saints were specifically noted for their missionary zeal to convert Jews, such as Vincent Ferrer. After the Second Vatican Council, many missionary orders which aimed to convert Jews to Christianity no longer actively sought to missionize (or proselytize) them. However, Traditionalist Roman Catholic groups, congregations and clergymen continue to advocate the missionizing of Jews according to traditional patterns, sometimes with success (e.g., the Society of St. Pius X which has notable Jewish converts among its faithful, many of whom have become traditionalist priests).
The Church's Ministry Among Jewish People (CMJ) is one of the ten official mission agencies of the Church of England. The Society for Distributing Hebrew Scriptures is another organisation, but it is not affiliated with the established Church.
There are several prophecies concerning the conversion of the Jewish people to Christianity in the scriptures of the Church of Jesus Christ of Latter-day Saints (LDS). The Book of Mormon teaches that the Jewish people need to believe in Jesus to be gathered to Israel. The Doctrine & Covenants teaches that the Jewish people will be converted to Christianity during the second coming when Jesus appears to them and shows them his wounds. It teaches that if the Jewish people do not convert to Christianity, then the world will be cursed. Early LDS prophets, such as Brigham Young and Wilford Woodruff, taught that Jewish people could not be truly converted because of the curse which resulted from Jewish deicide. However, after the establishment of the state of Israel, many LDS members felt that it was time for the Jewish people to start converting to Mormonism. During the 1950s, the LDS Church established several missions which specifically targeted Jewish people in several cities in the United States. After the LDS church began to give the priesthood to all males regardless of race in 1978, it also started to deemphasize the importance of race with regard to conversion. This led to a void of doctrinal teachings that resulted in a spectrum of views on how LDS members interpret scripture and previous teachings. According to research which was conducted by Armand Mauss, most LDS members believe that the Jewish people will need to be converted to Christianity in order to be forgiven for the crucifixion of Jesus Christ.
The Church of Jesus Christ of Latter-day Saints has also been criticized for baptizing deceased Jewish Holocaust victims. In 1995, in part as a result of public pressure, church leaders promised to put new policies into place that would help the church to end the practice, unless it was specifically requested or approved by the surviving spouses, children or parents of the victims. However, the practice has continued, including the baptism of the parents of Holocaust survivor and Jewish rights advocate Simon Wiesenthal.
In recent years, there has been much to note in the way of reconciliation between some Christian groups and the Jews. | [
{
"paragraph_id": 0,
"text": "Some Christian Churches, Christian groups, and ordinary Christians express religious antisemitism toward the Jewish people and the associated religion of Judaism.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Antisemitic Christian rhetoric and the resulting antipathy toward Jews both date back to the early years of Christianity and are derived from pagan anti-Jewish attitudes that were reinforced by the belief that Jews were responsible for the murder of Jesus of Nazareth. Christians imposed ever-increasing anti-Jewish measures over the ensuing centuries, including acts of ostracism, humiliation, expropriation, violence, and murder—measures which culminated in the Holocaust.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Christian antisemitism has been attributed to numerous factors including theological differences between these two related Abrahamic religions; the competition between Church and synagogue; the Christian missionary impulse; a misunderstanding of Jewish culture, beliefs, and practice; and the perception that Judaism was hostile toward Christianity. For two millennia, these attitudes were reinforced in Christian preaching, art, and popular teachings—all of which express contempt for Jews—as well as statutes designed to humiliate and stigmatise Jews.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Modern antisemitism has primarily been described as hatred against Jews as a race and its most recent expression is rooted in 18th-century racial theories. Anti-Judaism is rooted in hostility toward Judaism the religion; in Western Christianity, anti-Judaism effectively merged with antisemitism during the 12th century. Scholars have debated how Christian antisemitism played a role in the Nazi Third Reich, World War II, and the Holocaust. The Holocaust forced many Christians to reflect on the role(s) Christian theology and practice played and still play in anti-Judaism and antisemitism.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The legal status of Christianity and Judaism differed within the Roman Empire: Because the practice of Judaism was restricted to the Jewish people and Jewish proselytes, its followers were generally exempt from following the obligations that were imposed on followers of other religions by the Roman imperial cult. Since the reign of Julius Caesar, Judaism enjoyed the status of a \"licit religion\", but occasional persecutions still occurred, such as Tiberius' conscription and expulsion of Jews in 19 AD followed by Claudius' expulsion of Jews from Rome. Christianity however was not restricted to one people, and because Jewish Christians were excluded from the synagogue (see Council of Jamnia), they also lost the protected status that was granted to Judaism, even though that protection still had its limits (see Titus Flavius Clemens (consul), Rabbi Akiva, and Ten Martyrs).",
"title": "Early differences between Christianity and Judaism"
},
{
"paragraph_id": 5,
"text": "From the reign of Nero onwards, who is said by Tacitus to have blamed the Great Fire of Rome on Christians, the practice of Christianity was criminalized and Christians were frequently persecuted, but the persecution differed from region to region. Comparably, Judaism suffered setbacks due to the Jewish-Roman wars, and these setbacks are remembered in the legacy of the Ten Martyrs. Robin Lane Fox traces the origin of much of the later hostility to this early period of persecution, when the Roman authorities commonly tested the faith of suspected Christians by forcing them to pay homage to the deified emperor. Jews were exempt from this requirement as long as they paid the Fiscus Judaicus, and Christians (many or mostly of Jewish origin) would say that they were Jewish but refused to pay the tax. This had to be confirmed by the local Jewish authorities, who were likely to refuse to accept the Christians as fellow Jews, often leading to their execution. The Birkat haMinim was often brought forward as support for this charge that the Jews were responsible for the Persecution of Christians in the Roman Empire. In the 3rd century systematic persecution of Christians began and lasted until Constantine's conversion to Christianity. In 390 Theodosius I made Christianity the state church of the Roman Empire. While pagan cults and Manichaeism were suppressed, Judaism retained its legal status as a licit religion, though anti-Jewish violence still occurred. In the 5th century, some legal measures worsened the status of the Jews in the Roman Empire.",
"title": "Early differences between Christianity and Judaism"
},
{
"paragraph_id": 6,
"text": "Another point of contention for Christians concerning Judaism, according to the modern KJV of the Protestant Bible, is attributed more to a religious bias, rather than an issue of race or being a \"Semite\". Paul (a Benjamite Hebrew) clarifies this point in the letter to the Galatians where he makes plain his declaration \" There is neither Jew nor Greek, there is neither bond nor free, there is neither male nor female: for ye are all one in Christ Jesus. And if ye be Christ's, then are ye Abraham's seed, and heirs according to the promise.\" Further Paul states: \" Brethren, I speak after the manner of men; Though it be but a man's covenant, yet if it be confirmed, no man disannulleth, or addeth thereto. Now to Abraham and his seed were the promises made. He saith not, And to seeds, as of many; but as of one, And to thy seed, which is Christ.\"",
"title": "Early differences between Christianity and Judaism"
},
{
"paragraph_id": 7,
"text": "In Judaism, Jesus was not recognized as the Messiah, which Christians interpreted as His rejection, as a failed Jewish Messiah claimant and a false prophet. However, since Jews traditionally believe that the messiah has not yet come and the Messianic Age is not yet present, the total rejection of Jesus as either the messiah or a deity has never been a central issue in Judaism.",
"title": "Issues arising from the New Testament"
},
{
"paragraph_id": 8,
"text": "Many New Testament passages criticise the Pharisees and it has been argued that these passages have shaped the way that Christians viewed Jews. Like most Bible passages, however, they can be and have been interpreted in a variety of ways.",
"title": "Issues arising from the New Testament"
},
{
"paragraph_id": 9,
"text": "Mainstream Talmudic Rabbinical Judaism today directly descends from the Pharisees whom Jesus often criticized. During Jesus' life and at the time of his execution, the Pharisees were only one of several Jewish groups such as the Sadducees, Zealots, and Essenes who mostly died out not long after the period; indeed, Jewish scholars such as Harvey Falk and Hyam Maccoby have suggested that Jesus was himself a Pharisee. In the sermon on the mount, for example, Jesus says \"The Pharisees sit in Moses seat, therefore do what they say ..\". Arguments by Jesus and his disciples against certain groups of Pharisees and what he saw as their hypocrisy were most likely examples of disputes among Jews and internal to Judaism that were common at the time, see for example Hillel and Shammai.",
"title": "Issues arising from the New Testament"
},
{
"paragraph_id": 10,
"text": "Professor Lillian C. Freudmann, author of Antisemitism in the New Testament (University Press of America, 1994) has published a detailed study of the description of Jews in the New Testament, and the historical effects that such passages have had in the Christian community throughout history. Similar studies of such verses have been made by both Christian and Jewish scholars, including Professors Clark Williamsom (Christian Theological Seminary), Hyam Maccoby (The Leo Baeck Institute), Norman A. Beck (Texas Lutheran College), and Michael Berenbaum (Georgetown University). Most rabbis feel that these verses are antisemitic, and many Christian scholars, in America and Europe, have reached the same conclusion. Another example is John Dominic Crossan's 1995 book, titled Who Killed Jesus? Exposing the Roots of Anti-Semitism in the Gospel Story of the Death of Jesus.",
"title": "Issues arising from the New Testament"
},
{
"paragraph_id": 11,
"text": "Some biblical scholars have also been accused of holding antisemitic beliefs. Bruce J. Malina, a founding member of The Context Group, has come under criticism for going as far as to deny the Semitic ancestry of modern Israelis. He then ties this back to his work on first-century cultural anthropology.",
"title": "Issues arising from the New Testament"
},
{
"paragraph_id": 12,
"text": "Jewish deicide is the belief that Jews to this day will always be collectively responsible for the killing of Jesus, also known as the blood curse. A justification of this charge is derived from Matthew (27:24–25) alleging a crowd of Jews told Pilate that they and their children would be responsible for Jesus' death. Most members of the Church of Jesus Christ of Latter-day Saints accept the Jewish deicide, while the Catholic Church and several other Christian denominations have repudiated it.",
"title": "Issues arising from the New Testament"
},
{
"paragraph_id": 13,
"text": "After Paul's death, Christianity emerged as a separate religion, and Pauline Christianity emerged as the dominant form of Christianity, especially after Paul, James and the other apostles agreed on a compromise set of requirements. Some Christians continued to adhere to aspects of Jewish law, but they were few in number and often considered heretics by the Church. One example is the Ebionites, who seem to have denied the virgin birth of Jesus, the physical Resurrection of Jesus, and most of the books that were later canonized as the New Testament. For example, the Ethiopian Orthodox still continue Old Testament practices such as the Sabbath. As late as the 4th century Church Father John Chrysostom complained that some Christians were still attending Jewish synagogues. The Church Fathers identified Jews and Judaism with heresy and declared the people of Israel to be extra Deum (lat. \"outside of God\").",
"title": "Church Fathers"
},
{
"paragraph_id": 14,
"text": "Peter of Antioch referred to Christians that refused to venerate religious images as having \"Jewish minds\".",
"title": "Church Fathers"
},
{
"paragraph_id": 15,
"text": "In the early second century AD, the heretic Marcion of Sinope (c. 85 – c. 160 AD) declared that the Jewish God was a different God, inferior to the Christian one, and rejected the Jewish scriptures as the product of a lesser deity. Marcion's teachings, which were extremely popular, rejected Judaism not only as an incomplete revelation, but as a false one as well, but, at the same time, allowed less blame to be placed on the Jews personally for having not recognized Jesus, since, in Marcion's worldview, Jesus was not sent by the lesser Jewish God, but by the supreme Christian God, whom the Jews had no reason to recognize.",
"title": "Church Fathers"
},
{
"paragraph_id": 16,
"text": "In combating Marcion, orthodox apologists conceded that Judaism was an incomplete and inferior religion to Christianity, while also defending the Jewish scriptures as canonical.",
"title": "Church Fathers"
},
{
"paragraph_id": 17,
"text": "The Church Father Tertullian (c. 155 – c. 240 AD) had a particularly intense personal dislike towards the Jews and argued that the Gentiles had been chosen by God to replace the Jews, because they were worthier and more honorable. Origen of Alexandria (c. 184 – c. 253) was more knowledgeable about Judaism than any of the other Church Fathers, having studied Hebrew, met Rabbi Hillel the Younger, consulted and debated with Jewish scholars, and been influenced by the allegorical interpretations of Philo of Alexandria. Origen defended the canonicity of the Old Testament and defended Jews of the past as having been chosen by God for their merits. Nonetheless, he condemned contemporary Jews for not understanding their own Law, insisted that Christians were the \"true Israel\", and blamed the Jews for the death of Christ. He did, however, maintain that Jews would eventually attain salvation in the final apocatastasis. Hippolytus of Rome (c. 170 – c. 235 AD) wrote that the Jews had \"been darkened in the eyes of your soul with a darkness utter and everlasting.\"",
"title": "Church Fathers"
},
{
"paragraph_id": 18,
"text": "Patristic bishops of the patristic era such as Augustine of Hippo argued that the Jews should be left alive and suffering as a perpetual reminder of their murder of Christ. Like his anti-Jewish teacher, Ambrose of Milan, he defined Jews as a special subset of those damned to hell. As \"Witness People\", he sanctified collective punishment for the Jewish deicide and enslavement of Jews to Catholics: \"Not by bodily death, shall the ungodly race of carnal Jews perish ... 'Scatter them abroad, take away their strength. And bring them down O Lord'\". Augustine claimed to \"love\" the Jews but as a means to convert them to Christianity. Sometimes he identified all Jews with the evil Judas and developed the doctrine (together with Cyprian) that there was \"no salvation outside the Church\".",
"title": "Church Fathers"
},
{
"paragraph_id": 19,
"text": "John Chrysostom and other church fathers went further in their condemnation; the Catholic editor Paul Harkins wrote that St. John Chrysostom's anti-Jewish theology \"is no longer tenable (..) For these objectively unchristian acts he cannot be excused, even if he is the product of his times.\" John Chrysostom held, as most Church Fathers did, that the sins of all Jews were communal and endless, to him his Jewish neighbours were the collective representation of all alleged crimes of all preexisting Jews. All Church Fathers applied the passages of the New Testament concerning the alleged advocation of the crucifixion of Christ to all Jews of his day, the Jews were the ultimate evil. However, John Chrysostom went so far to say that because Jews rejected the Christian God in human flesh, Christ, they therefore deserved to be killed: \"grew fit for slaughter.\" In citing the New Testament, he claimed that Jesus was speaking about Jews when he said, \"as for these enemies of mine who did not want me to reign over them, bring them here and slay them before me.\"",
"title": "Church Fathers"
},
{
"paragraph_id": 20,
"text": "St. Jerome identified Jews with Judas Iscariot and the immoral use of money (\"Judas is cursed, that in Judas the Jews may be accursed... their prayers turn into sins\"). Jerome's homiletical assaults, that may have served as the basis for the anti-Jewish Good Friday liturgy, contrasts Jews with the evil, and that \"the ceremonies of the Jews are harmful and deadly to Christians\", whoever keeps them was doomed to the devil: \"My enemies are the Jews; they have conspired in hatred against Me, crucified Me, heaped evils of all kinds upon Me, blasphemed Me.\"",
"title": "Church Fathers"
},
{
"paragraph_id": 21,
"text": "Ephraim the Syrian wrote polemics against Jews in the 4th century, including the repeated accusation that Satan dwells among them as a partner. The writings were directed at Christians who were being proselytized by Jews. Ephraim feared that they were slipping back into Judaism; thus, he portrayed the Jews as enemies of Christianity, like Satan, to emphasize the contrast between the two religions, namely, that Christianity was Godly and true and Judaism was Satanic and false. Like John Chrysostom, his objective was to dissuade Christians from reverting to Judaism by emphasizing what he saw as the wickedness of the Jews and their religion.",
"title": "Church Fathers"
},
{
"paragraph_id": 22,
"text": "Bernard of Clairvaux said \"For us the Jews are Scripture's living words, because they remind us of what Our Lord suffered. They are not to be persecuted, killed, or even put to flight.\" According to Anna Sapir Abulafia, most scholars agree that Jews and Christians in Latin Christendom lived in relative peace with one another until the thirteenth century.",
"title": "Middle Ages"
},
{
"paragraph_id": 23,
"text": "Jews were subjected to a wide range of legal disabilities and restrictions in Medieval Europe. Jews were excluded from many trades, the occupations varying with place and time, and determined by the influence of various non-Jewish competing interests. Often Jews were barred from all occupations but money-lending and peddling, with even these at times forbidden. Jews' association to money lending would carry on throughout history in the stereotype of Jews being greedy and perpetuating capitalism.",
"title": "Middle Ages"
},
{
"paragraph_id": 24,
"text": "In the later medieval period, the number of Jews who were permitted to reside in certain places was limited; they were concentrated in ghettos, and they were also not allowed to own land; they were forced to pay discriminatory taxes whenever they entered cities or districts other than their own. The Oath More Judaico, the form of oath required from Jewish witnesses, developed bizarre or humiliating forms in some places, e.g. in the Swabian law of the 13th century, the Jew would be required to stand on the hide of a sow or a bloody lamb.",
"title": "Middle Ages"
},
{
"paragraph_id": 25,
"text": "The Fourth Lateran Council which was held in 1215 was the first council to proclaim that Jews were required to wear something which distinguished them as Jews (the same requirement was also imposed on Muslims). On many occasions, Jews were accused of blood libels, the supposed drinking of the blood of Christian children in mockery of the Christian Eucharist.",
"title": "Middle Ages"
},
{
"paragraph_id": 26,
"text": "Sicut Judaeis (the \"Constitution for the Jews\") was the official position of the papacy regarding Jews throughout the Middle Ages and later. The first bull was issued in about 1120 by Calixtus II, intended to protect Jews who suffered during the First Crusade, and was reaffirmed by many popes, even until the 15th century although they were not always strictly upheld.",
"title": "Middle Ages"
},
{
"paragraph_id": 27,
"text": "The bull forbade, besides other things, Christians from coercing Jews to convert, or to harm them, or to take their property, or to disturb the celebration of their festivals, or to interfere with their cemeteries, on pain of excommunication:",
"title": "Middle Ages"
},
{
"paragraph_id": 28,
"text": "We decree that no Christian shall use violence to force them to be baptized, so long as they are unwilling and refuse.…Without the judgment of the political authority of the land, no Christian shall presume to wound them or kill them or rob them of their money or change the good customs that they have thus far enjoyed in the place where they live.\"",
"title": "Middle Ages"
},
{
"paragraph_id": 29,
"text": "Antisemitism in popular European Christian culture escalated beginning in the 13th century. Blood libels and host desecration drew popular attention and led to many cases of persecution against Jews. Many believed Jews poisoned wells to cause plagues. In the case of blood libel it was widely believed that the Jews would kill a child before Easter and needed Christian blood to bake matzo. Throughout history if a Christian child was murdered accusations of blood libel would arise no matter how small the Jewish population. The Church often added to the fire by portraying the dead child as a martyr who had been tortured and child had powers like Jesus was believed to. Sometimes the children were even made into Saints. Antisemitic imagery such as Judensau and Ecclesia et Synagoga recurred in Christian art and architecture. Anti-Jewish Easter holiday customs such as the Burning of Judas continue to present time.",
"title": "Middle Ages"
},
{
"paragraph_id": 30,
"text": "In Iceland, one of the hymns repeated in the days leading up to Easter includes the lines,",
"title": "Middle Ages"
},
{
"paragraph_id": 31,
"text": "During the Middle Ages in Europe persecutions and formal expulsions of Jews were liable to occur at intervals, although this was also the case for other minority communities, regardless of whether they were religious or ethnic. There were particular outbursts of riotous persecution during the Rhineland massacres of 1096 in Germany accompanying the lead-up to the First Crusade, many involving the crusaders as they travelled to the East. There were many local expulsions from cities by local rulers and city councils. In Germany the Holy Roman Emperor generally tried to restrain persecution, if only for economic reasons, but he was often unable to exert much influence. In the Edict of Expulsion, King Edward I expelled all the Jews from England in 1290 (only after ransoming some 3,000 among the most wealthy of them), on the accusation of usury and undermining loyalty to the dynasty. In 1306 there was a wave of persecution in France, and there were widespread Black Death Jewish persecutions as the Jews were blamed by many Christians for the plague, or spreading it. As late as 1519, the Imperial city of Regensburg took advantage of the recent death of Emperor Maximilian I to expel its 500 Jews.",
"title": "Middle Ages"
},
{
"paragraph_id": 32,
"text": "\"Officially, the medieval Catholic church never advocated the expulsion of all the Jews from Christendom, or repudiated Augustine's doctrine of Jewish witness... Still, late medieval Christendom frequently ignored its mandates...\"",
"title": "Middle Ages"
},
{
"paragraph_id": 33,
"text": "The largest expulsion of Jews followed the Reconquista or the reunification of Spain, and it preceded the expulsion of the Muslims who would not convert, in spite of the protection of their religious rights promised by the Treaty of Granada (1491). On 31 March 1492 Ferdinand II of Aragon and Isabella I of Castile, the rulers of Spain who financed Christopher Columbus' voyage to the New World just a few months later in 1492, declared that all Jews in their territories should either convert to Christianity or leave the country. While some converted, many others left for Portugal, France, Italy (including the Papal States), Netherlands, Poland, the Ottoman Empire, and North Africa. Many of those who had fled to Portugal were later expelled by King Manuel in 1497 or left to avoid forced conversion and persecution.",
"title": "Middle Ages"
},
{
"paragraph_id": 34,
"text": "On 14 July 1555, Pope Paul IV issued papal bull Cum nimis absurdum which revoked all the rights of the Jewish community and placed religious and economic restrictions on Jews in the Papal States, renewed anti-Jewish legislation and subjected Jews to various degradations and restrictions on their personal freedom.",
"title": "From the Renaissance to the 17th century"
},
{
"paragraph_id": 35,
"text": "The bull established the Roman Ghetto and required Jews of Rome, which had existed as a community since before Christian times and which numbered about 2,000 at the time, to live in it. The Ghetto was a walled quarter with three gates that were locked at night. Jews were also restricted to one synagogue per city.",
"title": "From the Renaissance to the 17th century"
},
{
"paragraph_id": 36,
"text": "Paul IV's successor, Pope Pius IV, enforced the creation of other ghettos in most Italian towns, and his successor, Pope Pius V, recommended them to other bordering states.",
"title": "From the Renaissance to the 17th century"
},
{
"paragraph_id": 37,
"text": "Martin Luther at first made overtures towards the Jews, believing that the \"evils\" of Catholicism had prevented their conversion to Christianity. When his call to convert to his version of Christianity was unsuccessful, he became hostile to them.",
"title": "From the Renaissance to the 17th century"
},
{
"paragraph_id": 38,
"text": "In his book On the Jews and Their Lies, Luther excoriates them as \"venomous beasts, vipers, disgusting scum, canders, devils incarnate.\" He provided detailed recommendations for a pogrom against them, calling for their permanent oppression and expulsion, writing \"Their private houses must be destroyed and devastated, they could be lodged in stables. Let the magistrates burn their synagogues and let whatever escapes be covered with sand and mud. Let them be forced to work, and if this avails nothing, we will be compelled to expel them like dogs in order not to expose ourselves to incurring divine wrath and eternal damnation from the Jews and their lies.\" At one point he wrote: \"...we are at fault in not slaying them...\" a passage that \"may be termed the first work of modern antisemitism, and a giant step forward on the road to the Holocaust.\"",
"title": "From the Renaissance to the 17th century"
},
{
"paragraph_id": 39,
"text": "Luther's harsh comments about the Jews are seen by many as a continuation of medieval Christian antisemitism. In his final sermon shortly before his death, however, Luther preached: \"We want to treat them with Christian love and to pray for them, so that they might become converted and would receive the Lord.\"",
"title": "From the Renaissance to the 17th century"
},
{
"paragraph_id": 40,
"text": "In accordance with the anti-Jewish precepts of the Russian Orthodox Church, Russia's discriminatory policies towards Jews intensified when the partition of Poland in the 18th century resulted, for the first time in Russian history, in the possession of land with a large Jewish population. This land was designated as the Pale of Settlement from which Jews were forbidden to migrate into the interior of Russia. In 1772 Catherine II, the empress of Russia, forced the Jews living in the Pale of Settlement to stay in their shtetls and forbade them from returning to the towns that they occupied before the partition of Poland.",
"title": "18th century"
},
{
"paragraph_id": 41,
"text": "Throughout the 19th century and into the 20th, the Roman Catholic Church still incorporated strong antisemitic elements, despite increasing attempts to separate anti-Judaism (opposition to the Jewish religion on religious grounds) and racial antisemitism. Brown University historian David Kertzer, working from the Vatican archive, has argued in his book The Popes Against the Jews that in the 19th and early 20th centuries the Roman Catholic Church adhered to a distinction between \"good antisemitism\" and \"bad antisemitism\". The \"bad\" kind promoted hatred of Jews because of their descent. This was considered un-Christian because the Christian message was intended for all of humanity regardless of ethnicity; anyone could become a Christian. The \"good\" kind criticized alleged Jewish conspiracies to control newspapers, banks, and other institutions, to care only about accumulation of wealth, etc. Many Catholic bishops wrote articles criticizing Jews on such grounds, and, when they were accused of promoting hatred of Jews, they would remind people that they condemned the \"bad\" kind of antisemitism. Kertzer's work is not without critics. Scholar of Jewish-Christian relations Rabbi David G. Dalin, for example, criticized Kertzer in the Weekly Standard for using evidence selectively.",
"title": "19th century"
},
{
"paragraph_id": 42,
"text": "The counter-revolutionary Catholic royalist Louis de Bonald stands out among the earliest figures to explicitly call for the reversal of Jewish emancipation in the wake of the French Revolution. Bonald's attacks on the Jews are likely to have influenced Napoleon's decision to limit the civil rights of Alsatian Jews. Bonald's article Sur les juifs (1806) was one of the most venomous screeds of its era and furnished a paradigm which combined anti-liberalism, a defense of a rural society, traditional Christian antisemitism, and the identification of Jews with bankers and finance capital, which would in turn influence many subsequent right-wing reactionaries such as Roger Gougenot des Mousseaux, Charles Maurras, and Édouard Drumont, nationalists such as Maurice Barrès and Paolo Orano, and antisemitic socialists such as Alphonse Toussenel. Bonald furthermore declared that the Jews were an \"alien\" people, a \"state within a state\", and should be forced to wear a distinctive mark to more easily identify and discriminate against them.",
"title": "19th century"
},
{
"paragraph_id": 43,
"text": "In the 1840s, the popular counter-revolutionary Catholic journalist Louis Veuillot propagated Bonald's arguments against the Jewish \"financial aristocracy\" along with vicious attacks against the Talmud and the Jews as a \"deicidal people\" driven by hatred to \"enslave\" Christians. Gougenot des Mousseaux's Le Juif, le judaïsme et la judaïsation des peuples chrétiens (1869) has been called a \"Bible of modern antisemitism\" and was translated into German by Nazi ideologue Alfred Rosenberg. Between 1882 and 1886 alone, French priests published twenty antisemitic books blaming France's ills on the Jews and urging the government to consign them back to the ghettos, expel them, or hang them from the gallows.",
"title": "19th century"
},
{
"paragraph_id": 44,
"text": "In Italy the Jesuit priest Antonio Bresciani's highly popular novel 1850 novel L'Ebreo di Verona (The Jew of Verona) shaped religious antisemitism for decades, as did his work for La Civiltà Cattolica, which he helped launch.",
"title": "19th century"
},
{
"paragraph_id": 45,
"text": "Pope Pius VII (1800–1823) had the walls of the Jewish ghetto in Rome rebuilt after the Jews were emancipated by Napoleon, and Jews were restricted to the ghetto through the end of the Papal States in 1870. Official Catholic organizations, such as the Jesuits, banned candidates \"who are descended from the Jewish race unless it is clear that their father, grandfather, and great-grandfather have belonged to the Catholic Church\" until 1946.",
"title": "19th century"
},
{
"paragraph_id": 46,
"text": "In Russia, under the Tsarist regime, antisemitism intensified in the early years of the 20th century and was given official favour when the secret police forged the notorious Protocols of the Elders of Zion, a document purported to be a transcription of a plan by Jewish elders to achieve global domination. Violence against the Jews in the Kishinev pogrom in 1903 was continued after the 1905 revolution by the activities of the Black Hundreds. The Beilis Trial of 1913 showed that it was possible to revive the blood libel accusation in Russia.",
"title": "20th century"
},
{
"paragraph_id": 47,
"text": "Catholic writers such as Ernest Jouin, who published the Protocols in French, seamlessly blended racial and religious antisemitism, as in his statement that \"from the triple viewpoint of race, of nationality, and of religion, the Jew has become the enemy of humanity.\" Pope Pius XI praised Jouin for \"combating our mortal [Jewish] enemy\" and appointed him to high papal office as a protonotary apostolic.",
"title": "20th century"
},
{
"paragraph_id": 48,
"text": "In 1916, in the midst of the First World War, American Jews petitioned Pope Benedict XV on behalf of the Polish Jews.",
"title": "20th century"
},
{
"paragraph_id": 49,
"text": "During a meeting with Roman Catholic Bishop Wilhelm Berning of Osnabrück On April 26, 1933, Hitler declared:",
"title": "20th century"
},
{
"paragraph_id": 50,
"text": "I have been attacked because of my handling of the Jewish question. The Catholic Church considered the Jews pestilent for fifteen hundred years, put them in ghettos, etc., because it recognized the Jews for what they were. In the epoch of liberalism the danger was no longer recognized. I am moving back toward the time in which a fifteen-hundred-year-long tradition was implemented. I do not set race over religion, but I recognize the representatives of this race as pestilent for the state and for the Church, and perhaps I am thereby doing Christianity a great service by pushing them out of schools and public functions.",
"title": "20th century"
},
{
"paragraph_id": 51,
"text": "The transcript of the discussion does not contain any response by Bishop Berning. Martin Rhonheimer does not consider this unusual because in his opinion, for a Catholic Bishop in 1933 there was nothing particularly objectionable \"in this historically correct reminder\".",
"title": "20th century"
},
{
"paragraph_id": 52,
"text": "The Nazis used Martin Luther's book, On the Jews and Their Lies (1543), to justify their claim that their ideology was morally righteous. Luther even went so far as to advocate the murder of Jews who refused to convert to Christianity by writing that \"we are at fault in not slaying them.\"",
"title": "20th century"
},
{
"paragraph_id": 53,
"text": "Archbishop Robert Runcie asserted that: \"Without centuries of Christian antisemitism, Hitler's passionate hatred would never have been so fervently echoed... because for centuries Christians have held Jews collectively responsible for the death of Jesus. On Good Friday in times past, Jews have cowered behind locked doors with fear of a Christian mob seeking 'revenge' for deicide. Without the poisoning of Christian minds through the centuries, the Holocaust is unthinkable.\" The dissident Catholic priest Hans Küng has written that \"Nazi anti-Judaism was the work of godless, anti-Christian criminals. But it would not have been possible without the almost two thousand years' pre-history of 'Christian' anti-Judaism...\" The consensus among historians is that Nazism as a whole was either unrelated or actively opposed to Christianity, and Hitler was strongly critical of it, although Germany remained mostly Christian during the Nazi era.",
"title": "20th century"
},
{
"paragraph_id": 54,
"text": "The document Dabru Emet was issued by over 220 rabbis and intellectuals from all branches of Judaism in 2000 as a statement about Jewish-Christian relations. This document states,",
"title": "20th century"
},
{
"paragraph_id": 55,
"text": "Nazism was not a Christian phenomenon. Without the long history of Christian anti-Judaism and Christian violence against Jews, Nazi ideology could not have taken hold nor could it have been carried out. Too many Christians participated in, or were sympathetic to, Nazi atrocities against Jews. Other Christians did not protest sufficiently against these atrocities. But Nazism itself was not an inevitable outcome of Christianity.",
"title": "20th century"
},
{
"paragraph_id": 56,
"text": "According to American historian Lucy Dawidowicz, antisemitism has a long history within Christianity. The line of \"antisemitic descent\" from Luther, the author of On the Jews and Their Lies, to Hitler is \"easy to draw.\" In her The War Against the Jews, 1933-1945, she contends that Luther and Hitler were obsessed by the \"demonologized universe\" inhabited by Jews. Dawidowicz writes that the similarities between Luther's anti-Jewish writings and modern antisemitism are no coincidence, because they derived from a common history of Judenhass, which can be traced to Haman's advice to Ahasuerus. Although modern German antisemitism also has its roots in German nationalism and the liberal revolution of 1848, Christian antisemitism she writes is a foundation that was laid by the Roman Catholic Church and \"upon which Luther built.\"",
"title": "20th century"
},
{
"paragraph_id": 57,
"text": "The Confessing Church was, in 1934, the first Christian opposition group. The Catholic Church officially condemned the Nazi theory of racism in Germany in 1937 with the encyclical \"Mit brennender Sorge\", signed by Pope Pius XI, and Cardinal Michael von Faulhaber led the Catholic opposition, preaching against racism.",
"title": "20th century"
},
{
"paragraph_id": 58,
"text": "Many individual Christian clergy and laypeople of all denominations had to pay for their opposition with their lives, including:",
"title": "20th century"
},
{
"paragraph_id": 59,
"text": "By the 1940s, few Christians were willing to publicly oppose Nazi policy, but many Christians secretly helped save the lives of Jews. There are many sections of Israel's Holocaust Remembrance Museum, Yad Vashem, which are dedicated to honoring these \"Righteous Among the Nations\".",
"title": "20th century"
},
{
"paragraph_id": 60,
"text": "Before he became Pope, Cardinal Pacelli addressed the International Eucharistic Congress in Budapest on 25–30 May 1938 during which he made reference to the Jews \"whose lips curse [Christ] and whose hearts reject him even today\"; at this time antisemitic laws were in the process of being formulated in Hungary.",
"title": "20th century"
},
{
"paragraph_id": 61,
"text": "The 1937 encyclical Mit brennender Sorge was issued by Pope Pius XI, but drafted by the future Pope Pius XII and read from the pulpits of all German Catholic churches, it condemned Nazi ideology and has been characterized by scholars as the \"first great official public document to dare to confront and criticize Nazism\" and \"one of the greatest such condemnations ever issued by the Vatican.\"",
"title": "20th century"
},
{
"paragraph_id": 62,
"text": "In the summer of 1942, Pius explained to his college of Cardinals the reasons for the great gulf that existed between Jews and Christians at the theological level: \"Jerusalem has responded to His call and to His grace with the same rigid blindness and stubborn ingratitude that has led it along the path of guilt to the murder of God.\" Historian Guido Knopp describes these comments of Pius as being \"incomprehensible\" at a time when \"Jerusalem was being murdered by the million\". This traditional adversarial relationship with Judaism would be reversed in Nostra aetate, which was issued during the Second Vatican Council.",
"title": "20th century"
},
{
"paragraph_id": 63,
"text": "Prominent members of the Jewish community have contradicted the criticisms of Pius and spoke highly of his efforts to protect Jews. The Israeli historian Pinchas Lapide interviewed war survivors and concluded that Pius XII \"was instrumental in saving at least 700,000, but probably as many as 860,000 Jews from certain death at Nazi hands\". Some historians dispute this estimate.",
"title": "20th century"
},
{
"paragraph_id": 64,
"text": "The Christian Identity movement, the Ku Klux Klan and other White supremacist groups have expressed antisemitic views. They claim that their antisemitism is based on purported Jewish control of the media, control of international banks, involvement in radical left-wing politics, and the Jews' promotion of multiculturalism, anti-Christian groups, liberalism and perverse organizations. They rebuke charges of racism by claiming that Jews who share their views maintain membership in their organizations. A racial belief which is common among these groups, but not universal among them, is an alternative history doctrine concerning the descendants of the Lost Tribes of Israel. In some of its forms, this doctrine absolutely denies the view that modern Jews have any ethnic connection to the Israel of the Bible. Instead, according to extreme forms of this doctrine, the true Israelites and the true humans are the members of the Adamic (white) race. These groups are often rejected and they are not even considered Christian groups by mainstream Christian denominations and the vast majority of Christians around the world.",
"title": "20th century"
},
{
"paragraph_id": 65,
"text": "Antisemitism remains a substantial problem in Europe and to a greater or lesser degree, it also exists in many other nations, including Eastern Europe and the former Soviet Union, and tensions between some Muslim immigrants and Jews have increased across Europe. The US State Department reports that antisemitism has increased dramatically in Europe and Eurasia since 2000.",
"title": "20th century"
},
{
"paragraph_id": 66,
"text": "While it has been on the decline since the 1940s, a measurable amount of antisemitism still exists in the United States, although acts of violence are rare. For example, the influential Evangelical preacher Billy Graham and the then-president Richard Nixon were caught on tape in the early 1970s while they were discussing matters like how to address the Jews' control of the American media. This belief in Jewish conspiracies and domination of the media was similar to those of Graham's former mentors: William Bell Riley chose Graham to succeed him as the second president of Northwestern Bible and Missionary Training School and evangelist Mordecai Ham led the meetings where Graham first believed in Christ. Both held strongly antisemitic views. The 2001 survey by the Anti-Defamation League reported 1432 acts of antisemitism in the United States that year. The figure included 877 acts of harassment, including verbal intimidation, threats and physical assaults. A minority of American churches engage in anti-Israel activism, including support for the controversial BDS (Boycott, Divestment and Sanctions) movement. While not directly indicative of antisemitism, this activism often conflates the Israeli government's treatment of Palestinians with that of Jesus, thereby promoting the antisemitic doctrine of Jewish guilt. Many Christian Zionists are also accused of antisemitism, such as John Hagee, who argued that the Jews brought the Holocaust upon themselves by angering God.",
"title": "20th century"
},
{
"paragraph_id": 67,
"text": "Relations between Jews and Christians have dramatically improved since the 20th century. According to a global poll which was conducted in 2014 by the Anti-Defamation League, a Jewish group which is devoted to fighting antisemitism and other forms of racism, data was collected from 102 countries with regard to their population's attitudes towards Jews and it revealed that only 24% of the world's Christians held views which were considered antisemitic according to the ADL's index, compared to 49% of the world's Muslims.",
"title": "20th century"
},
{
"paragraph_id": 68,
"text": "Many Christians do not consider anti-Judaism antisemitism. They regard anti-Judaism as a disagreement with the tenets of Judaism by religiously sincere people, while they regard antisemitism as an emotional bias or hatred which does not specifically target the religion of Judaism. Under this approach, anti-Judaism is not regarded as antisemitism because it does not involve actual hostility towards the Jewish people, instead, anti-Judaism only rejects the religious beliefs of Judaism.",
"title": "Anti-Judaism"
},
{
"paragraph_id": 69,
"text": "Others believe that anti-Judaism is rejection of Judaism as a religion or opposition to Judaism's beliefs and practices essentially because of their source in Judaism or because a belief or practice is associated with the Jewish people. (But see supersessionism)",
"title": "Anti-Judaism"
},
{
"paragraph_id": 70,
"text": "The position that \"Christian theological anti-Judaism is a phenomenon which is distinct from modern antisemitism, which is rooted in economic and racial thought, so that Christian teachings should not be held responsible for antisemitism\" has been articulated, among other people, by Pope John Paul II in 'We Remember: A Reflection on the Shoah,' and the Jewish declaration on Christianity, Dabru Emet. Several scholars, including Susannah Heschel, Gavin I Langmuir and Uriel Tal have challenged this position, by arguing that anti-Judaism directly led to modern antisemitism.",
"title": "Anti-Judaism"
},
{
"paragraph_id": 71,
"text": "Although some Christians did consider anti-Judaism to be contrary to Christian teaching in the past, this view was not widely expressed by Christian leaders and lay people. In many cases, the practical tolerance towards the Jewish religion and Jews prevailed. Some Christian groups condemned verbal anti-Judaism, particularly in their early years.",
"title": "Anti-Judaism"
},
{
"paragraph_id": 72,
"text": "Some Jewish organizations have denounced evangelistic and missionary activities which specifically target Jews by labeling them antisemitic.",
"title": "Conversion of Jews"
},
{
"paragraph_id": 73,
"text": "The Southern Baptist Convention (SBC), the largest Protestant Christian denomination in the U.S., has explicitly rejected suggestions that it should back away from seeking to convert Jews, a position which critics have called antisemitic, but a position which Baptists believe is consistent with their view that salvation is solely found through faith in Christ. In 1996 the SBC approved a resolution calling for efforts to seek the conversion of Jews \"as well as the salvation of 'every kindred and tongue and people and nation.'\"",
"title": "Conversion of Jews"
},
{
"paragraph_id": 74,
"text": "Most Evangelicals agree with the SBC's position, and some of them also support efforts which specifically seek the Jews' conversion. Additionally, these Evangelical groups are among the most pro-Israel groups. (For more information, see Christian Zionism.) One controversial group which has received a considerable amount of support from some Evangelical churches is Jews for Jesus, which claims that Jews can \"complete\" their Jewish faith by accepting Jesus as the Messiah.",
"title": "Conversion of Jews"
},
{
"paragraph_id": 75,
"text": "The Presbyterian Church (USA), the United Methodist Church, and the United Church of Canada have ended their efforts to convert Jews. While Anglicans do not, as a rule, seek converts from other Christian denominations, the General Synod has affirmed that \"the good news of salvation in Jesus Christ is for all and must be shared with all including people from other faiths or of no faith and that to do anything else would be to institutionalize discrimination\".",
"title": "Conversion of Jews"
},
{
"paragraph_id": 76,
"text": "The Roman Catholic Church formerly operated religious congregations which specifically aimed to convert Jews. Some of these congregations were actually founded by Jewish converts, like the Congregation of Our Lady of Sion, whose members were nuns and ordained priests. Many Catholic saints were specifically noted for their missionary zeal to convert Jews, such as Vincent Ferrer. After the Second Vatican Council, many missionary orders which aimed to convert Jews to Christianity no longer actively sought to missionize (or proselytize) them. However, Traditionalist Roman Catholic groups, congregations and clergymen continue to advocate the missionizing of Jews according to traditional patterns, sometimes with success (e.g., the Society of St. Pius X which has notable Jewish converts among its faithful, many of whom have become traditionalist priests).",
"title": "Conversion of Jews"
},
{
"paragraph_id": 77,
"text": "The Church's Ministry Among Jewish People (CMJ) is one of the ten official mission agencies of the Church of England. The Society for Distributing Hebrew Scriptures is another organisation, but it is not affiliated with the established Church.",
"title": "Conversion of Jews"
},
{
"paragraph_id": 78,
"text": "There are several prophecies concerning the conversion of the Jewish people to Christianity in the scriptures of the Church of Jesus Christ of Latter-day Saints (LDS). The Book of Mormon teaches that the Jewish people need to believe in Jesus to be gathered to Israel. The Doctrine & Covenants teaches that the Jewish people will be converted to Christianity during the second coming when Jesus appears to them and shows them his wounds. It teaches that if the Jewish people do not convert to Christianity, then the world would be cursed. Early LDS prophets, such as Brigham Young and Wildord Woodruff, taught that Jewish people could not be truly converted because of the curse which resulted from Jewish deicide. However, after the establishment of the state of Israel, many LDS members felt that it was time for the Jewish people to start converting to Mormonism. During the 1950s, the LDS Church established several missions which specifically targeted Jewish people in several cities in the United States. After the LDS church began to give the priesthood to all males regardless of race in 1978, it also started to deemphasize the importance of race with regard to conversion. This led to a void of doctrinal teachings that resulted in a spectrum of views in how LDS members interpret scripture and previous teachings. According to research which was conducted by Armand Mauss, most LDS members believe that the Jewish people will need to be converted to Christianity in order to be forgiven for the crucifixion of Jesus Christ.",
"title": "Conversion of Jews"
},
{
"paragraph_id": 79,
"text": "The Church of Jesus Christ of Latter-day Saints has also been criticized for baptizing deceased Jewish Holocaust victims. In 1995, in part as a result of public pressure, church leaders promised to put new policies into place that would help the church to end the practice, unless it was specifically requested or approved by the surviving spouses, children or parents of the victims. However, the practice has continued, including the baptism of the parents of Holocaust survivor and Jewish rights advocate Simon Wiesenthal.",
"title": "Conversion of Jews"
},
{
"paragraph_id": 80,
"text": "In recent years, there has been much to note in the way of reconciliation between some Christian groups and the Jews.",
"title": "Reconciliation between Judaism and Christian groups"
}
] | Some Christian Churches, Christian groups, and ordinary Christians express religious antisemitism toward the Jewish people and the associated religion of Judaism. Antisemitic Christian rhetoric and the resulting antipathy toward Jews both date back to the early years of Christianity and are derived from pagan anti-Jewish attitudes that were reinforced by the belief that Jews were responsible for the murder of Jesus of Nazareth. Christians imposed ever-increasing anti-Jewish measures over the ensuing centuries, including acts of ostracism, humiliation, expropriation, violence, and murder—measures which culminated in the Holocaust. Christian antisemitism has been attributed to numerous factors including theological differences between these two related Abrahamic religions; the competition between Church and synagogue; the Christian missionary impulse; a misunderstanding of Jewish culture, beliefs, and practice; and the perception that Judaism was hostile toward Christianity. For two millennia, these attitudes were reinforced in Christian preaching, art, and popular teachings—all of which express contempt for Jews—as well as statutes designed to humiliate and stigmatise Jews. Modern antisemitism has primarily been described as hatred against Jews as a race and its most recent expression is rooted in 18th-century racial theories. Anti-Judaism is rooted in hostility toward Judaism the religion; in Western Christianity, anti-Judaism effectively merged with antisemitism during the 12th century. Scholars have debated how Christian antisemitism played a role in the Nazi Third Reich, World War II, and the Holocaust. The Holocaust forced many Christians to reflect on the role(s) Christian theology and practice played and still play in anti-Judaism and antisemitism. | 2001-11-16T13:09:18Z | 2023-12-21T00:40:56Z | [
"Template:Webarchive",
"Template:Unreferenced section",
"Template:Portal",
"Template:ISBN",
"Template:Sourcetext",
"Template:Antisemitism topics",
"Template:Main article",
"Template:More citations needed section",
"Template:Main",
"Template:Bibleverse",
"Template:Rp",
"Template:Citation needed",
"Template:Cite journal",
"Template:Cite web",
"Template:Div col",
"Template:Bibleref2",
"Template:Cite news",
"Template:Antisemitism",
"Template:More citations needed",
"Template:Circa",
"Template:Cite book",
"Template:Refbegin",
"Template:Refend",
"Template:Short description",
"Template:Further",
"Template:According to whom",
"Template:Cite magazine",
"Template:Div col end",
"Template:Reflist",
"Template:CathEncy",
"Template:See also",
"Template:' \"",
"Template:Blockquote"
] | https://en.wikipedia.org/wiki/Antisemitism_in_Christianity |
6,731 | Boeing C-17 Globemaster III | The McDonnell Douglas/Boeing C-17 Globemaster III is a large military transport aircraft that was developed for the United States Air Force (USAF) from the 1980s to the early 1990s by McDonnell Douglas. The C-17 carries forward the name of two previous piston-engined military cargo aircraft, the Douglas C-74 Globemaster and the Douglas C-124 Globemaster II.
The C-17 is based upon the YC-15, a smaller prototype airlifter designed during the 1970s. It was designed to replace the Lockheed C-141 Starlifter, and also fulfill some of the duties of the Lockheed C-5 Galaxy. Compared to the YC-15, the redesigned airlifter differed in having swept wings, increased size, and more powerful engines. Development was protracted by a series of design issues, causing the company to incur a loss of nearly US$1.5 billion on the program's development phase. On 15 September 1991, roughly one year behind schedule, the first C-17 performed its maiden flight. The C-17 formally entered USAF service on 17 January 1995. Boeing, which merged with McDonnell Douglas in 1997, continued to manufacture the C-17 for almost two decades. The final C-17 was completed at the Long Beach, California, plant and flown on 29 November 2015.
The C-17 commonly performs tactical and strategic airlift missions, transporting troops and cargo throughout the world; additional roles include medical evacuation and airdrop duties. The transport is in service with the USAF along with air arms of India, the United Kingdom, Australia, Canada, Qatar, the United Arab Emirates, Kuwait, and the Europe-based multilateral organization Heavy Airlift Wing. The type played a key logistical role during both Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as in providing humanitarian aid in the aftermath of various natural disasters, including the 2010 Haiti earthquake, the 2011 Sindh floods, and the 2023 Turkey-Syria earthquake.
In the 1970s, the U.S. Air Force began looking for a replacement for its Lockheed C-130 Hercules tactical cargo aircraft. The Advanced Medium STOL Transport (AMST) competition was held, with Boeing proposing the YC-14, and McDonnell Douglas proposing the YC-15. Though both entrants exceeded specified requirements, the AMST competition was canceled before a winner was selected. The USAF started the C-X program in November 1979 to develop a larger AMST with longer range to augment its strategic airlift.
By 1980, the USAF had a large fleet of aging C-141 Starlifter cargo aircraft. Compounding matters, increased strategic airlift capability was needed to fulfill its rapid-deployment airlift requirements. The USAF set mission requirements and released a request for proposals (RFP) for C-X in October 1980. McDonnell Douglas chose to develop a new aircraft based on the YC-15. Boeing bid an enlarged three-engine version of its AMST YC-14. Lockheed submitted both a C-5-based design and an enlarged C-141 design. On 28 August 1981, McDonnell Douglas was chosen to build its proposal, then designated C-17. Compared to the YC-15, the new aircraft differed in having swept wings, increased size, and more powerful engines. This would allow it to perform the work done by the C-141, and to fulfill some of the duties of the Lockheed C-5 Galaxy, freeing the C-5 fleet for outsize cargo.
Alternative proposals were pursued to fill airlift needs after the C-X contest. These were lengthening of C-141As into C-141Bs, ordering more C-5s, continued purchases of KC-10s, and expansion of the Civil Reserve Air Fleet. Limited budgets reduced program funding, requiring a delay of four years. During this time contracts were awarded for preliminary design work and for the completion of engine certification. In December 1985, a full-scale development contract was awarded, under Program Manager Bob Clepper. At this time, first flight was planned for 1990. The USAF had formed a requirement for 210 aircraft.
Development problems and limited funding caused delays in the late 1980s. Criticisms were made of the developing aircraft and questions were raised about more cost-effective alternatives during this time. In April 1990, Secretary of Defense Dick Cheney reduced the order from 210 to 120 aircraft. The maiden flight of the C-17 took place on 15 September 1991 from McDonnell Douglas's plant in Long Beach, California, about a year behind schedule. The first aircraft (T-1) and five more production models (P1-P5) participated in extensive flight testing and evaluation at Edwards Air Force Base. Two complete airframes were built for static and repeated load testing.
A static test of the C-17 wing in October 1992 resulted in its failure at 128% of design limit load, below the 150% requirement. Both wings buckled from rear to front and failures occurred in stringers, spars, and ribs. Some $100 million was spent to redesign the wing structure; the wing failed at 145% during a second test in September 1993. A review of the test data, however, showed that the wing was not loaded correctly and did indeed meet the requirement. The C-17 received the "Globemaster III" name in early 1993. In late 1993, the Department of Defense (DoD) gave the contractor two years to solve production issues and cost overruns or face the contract's termination after the delivery of the 40th aircraft. By accepting the 1993 terms, McDonnell Douglas incurred a loss of nearly US$1.5 billion on the program's development phase.
In March 1994, the Non-Developmental Airlift Aircraft program was established to procure a transport aircraft using commercial practices as a possible alternative or supplement to the C-17. Initial material solutions considered included buying a modified Boeing 747-400 (NDAA), restarting the C-5 production line, extending the C-141 service life, and continuing C-17 production. The field eventually narrowed to the Boeing 747-400, the Lockheed Martin C-5D, and the McDonnell Douglas C-17. The NDAA program was initiated after the C-17 program was temporarily capped at a 40-aircraft buy pending further evaluation of C-17 cost and performance and an assessment of commercial airlift alternatives.
In April 1994, the program remained over budget and did not meet weight, fuel burn, payload, and range specifications. It failed several key criteria during airworthiness evaluation tests. Problems were found with the mission software, landing gear, and other areas. In May 1994, it was proposed to cut production to as few as 32 aircraft; these cuts were later rescinded. A July 1994 Government Accountability Office (GAO) report revealed that USAF and DoD studies from 1986 and 1991 stated the C-17 could use 6,400 more runways outside the U.S. than the C-5, but these studies had only considered runway dimensions, not runway strength or load classification numbers (LCN). The C-5 has a lower LCN, but the USAF classifies both in the same broad load classification group. When considering runway dimensions and load ratings, the C-17's worldwide runway advantage over the C-5 shrank from 6,400 to 911 airfields. The report also cited "current military doctrine that does not reflect the use of small, austere airfields", meaning the C-17's short-field capability was not considered.
A January 1995 GAO report stated that the USAF originally planned to order 210 C-17s at a cost of $41.8 billion, and that the 120 aircraft on order were to cost $39.5 billion based on a 1992 estimate. In March 1994, the U.S. Army decided it did not need the 60,000 lb (27,000 kg) low-altitude parachute-extraction system delivery with the C-17 and that the C-130's 42,000 lb (19,000 kg) capability was sufficient. C-17 testing was limited to this lower weight. Airflow issues prevented the C-17 from meeting airdrop requirements. A February 1997 GAO report revealed that a C-17 with a full payload could not land on 3,000 ft (914 m) wet runways; simulations suggested a distance of 5,000 ft (1,500 m) was required. In March 1997, the YC-15 was retrieved from AMARC to be made flightworthy again for further flight tests in support of the C-17 program.
By September 1995, most of the prior issues were reportedly resolved and the C-17 was meeting all performance and reliability targets. The first USAF squadron was declared operational in January 1995.
In 1996, the DoD ordered another 80 aircraft for a total of 120. In 1997, McDonnell Douglas merged with domestic competitor Boeing. In April 1999, Boeing offered to cut the C-17's unit price if the USAF bought 60 more; in August 2002, the order was increased to 180 aircraft. In 2007, 190 C-17s were on order for the USAF. On 6 February 2009, Boeing was awarded a $2.95 billion contract for 15 additional C-17s, increasing the total USAF fleet to 205 and extending production from August 2009 to August 2010. On 6 April 2009, U.S. Secretary of Defense Robert Gates stated that there would be no more C-17s ordered beyond the 205 planned. However, on 12 June 2009, the House Armed Services Air and Land Forces Subcommittee added a further 17 C-17s.
In 2010, Boeing reduced the production rate to 10 aircraft per year from a high of 16 per year, due to dwindling orders and to extend the production line's life while additional orders were sought. The workforce was reduced by about 1,100 through 2012, and a second shift at the Long Beach plant was eliminated. By April 2011, 230 production C-17s had been delivered, including 210 to the USAF. The C-17 prototype "T-1" was retired in 2012 after use as a testbed by the USAF. In January 2010, the USAF announced the end of Boeing's performance-based logistics contracts to maintain the type. On 19 June 2012, the USAF ordered its 224th and final C-17 to replace one that crashed in Alaska in July 2010.
In September 2013, Boeing announced that C-17 production was starting to close down. In October 2014, the main wing spar of the 279th and last aircraft was completed; this C-17 was delivered in 2015, after which Boeing closed the Long Beach plant. Production of spare components was to continue until at least 2017. The C-17 is projected to be in service for several decades. In February 2014, Boeing was engaged in sales talks with "five or six" countries for the remaining 15 C-17s; thus Boeing decided to build ten aircraft without confirmed buyers in anticipation of future purchases.
In May 2015, The Wall Street Journal reported that Boeing expected to book a charge of under $100 million and cut 3,000 positions associated with the C-17 program, and also suggested that Airbus' lower cost A400M Atlas took international sales away from the C-17.
The C-17 Globemaster III is a strategic transport aircraft, able to airlift cargo close to a battle area. The size and weight of U.S. mechanized firepower and equipment have grown in recent decades, increasing air mobility requirements, particularly for large or heavy non-palletized outsize cargo. It has a length of 174 feet (53 m) and a wingspan of 169 feet 10 inches (51.77 m), and uses about 8% composite materials, mostly in secondary structure and control surfaces.
The C-17 is powered by four Pratt & Whitney F117-PW-100 turbofan engines, which are based on the commercial Pratt & Whitney PW2040 used on the Boeing 757. Each engine is rated at 40,400 lbf (180 kN) of thrust. The engine's thrust reversers direct engine exhaust air upwards and forward, reducing the chances of foreign object damage by ingestion of runway debris, and providing enough reverse thrust to back up the aircraft while taxiing. The thrust reversers can also be used in flight at idle-reverse for added drag in maximum-rate descents. In vortex surfing tests performed by two C-17s, up to 10% fuel savings were reported.
For cargo operations the C-17 requires a crew of three: pilot, copilot, and loadmaster. The cargo compartment is 88 feet (27 m) long by 18 feet (5.5 m) wide by 12 feet 4 inches (3.76 m) high. The cargo floor has rollers for palletized cargo that can be flipped to provide a flat floor suitable for vehicles and other rolling stock. Cargo is loaded through a large aft ramp that accommodates rolling stock, such as a 69-ton (63-metric ton) M1 Abrams main battle tank, other armored vehicles, trucks, and trailers, along with palletized cargo.
Maximum payload of the C-17 is 170,900 pounds (77,500 kg; 85.5 short tons), and its maximum takeoff weight is 585,000 pounds (265,000 kg). With a payload of 160,000 pounds (73,000 kg) and an initial cruise altitude of 28,000 ft (8,500 m), the C-17 has an unrefueled range of about 2,400 nautical miles (4,400 kilometres) on the first 71 aircraft, and 2,800 nautical miles (5,200 kilometres) on all subsequent extended-range models that include a sealed center wing bay as a fuel tank. Boeing informally calls these aircraft the C-17 ER. The C-17's cruise speed is about 450 knots (830 km/h) (Mach 0.74). It is designed to airdrop 102 paratroopers and their equipment. According to Boeing, the maximum unloaded range is 6,230 nautical miles (11,538 kilometres).
The C-17 is designed to operate from runways as short as 3,500 ft (1,067 m) and as narrow as 90 ft (27 m). The C-17 can also operate from unpaved, unimproved runways, although with a higher risk of damage to the aircraft. The thrust reversers can be used to move the aircraft backwards and reverse direction on narrow taxiways using a three- (or more) point turn. The plane is designed for 20 man-hours of maintenance per flight hour, and a 74% mission availability rate.
The first production C-17 was delivered to Charleston Air Force Base, South Carolina, on 14 July 1993. The first C-17 unit, the 17th Airlift Squadron, became operationally ready on 17 January 1995. It has broken 22 records for oversized payloads. The C-17 was awarded U.S. aviation's most prestigious award, the Collier Trophy, in 1994. A Congressional report on operations in Kosovo and Operation Allied Force noted "One of the great success stories...was the performance of the Air Force's C-17A". It flew half of the strategic airlift missions in the operation; the type could use small airfields, easing operations, and rapid turnaround times led to efficient utilization.
In 2006, eight C-17s were delivered to March Joint Air Reserve Base, California, where they were controlled by the Air Force Reserve Command (AFRC) and assigned to the 452nd Air Mobility Wing. C-17s were subsequently assigned to AMC's 436th Airlift Wing and its AFRC "associate" unit, the 512th Airlift Wing, at Dover Air Force Base, Delaware, supplementing the Lockheed C-5 Galaxy. The Mississippi Air National Guard's 172nd Airlift Group received their first of eight C-17s in 2006. In 2011, the New York Air National Guard's 105th Airlift Wing at Stewart Air National Guard Base transitioned from the C-5 to the C-17.
C-17s delivered military supplies during Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq as well as humanitarian aid in the aftermath of the 2010 Haiti earthquake and the 2011 Sindh floods, delivering thousands of food rations and tons of medical and emergency supplies. On 26 March 2003, 15 USAF C-17s participated in the biggest combat airdrop since the United States invasion of Panama in December 1989: the night-time airdrop of 1,000 paratroopers from the 173rd Airborne Brigade occurred over Bashur, Iraq. These airdrops were followed by C-17s ferrying M1 Abrams, M2 Bradleys, M113s and artillery. USAF C-17s have also assisted allies in their airlift needs, such as transporting Canadian vehicles to Afghanistan in 2003 and Australian forces for the Australian-led military deployment to East Timor in 2006. In 2006, USAF C-17s flew 15 Canadian Leopard C2 tanks from Kyrgyzstan into Kandahar in support of NATO's Afghanistan mission. In 2013, five USAF C-17s supported French operations in Mali, operating with other nations' C-17s (the RAF, NATO and RCAF deployed a single C-17 each).
Since 1999, C-17s have flown annually to Antarctica on Operation Deep Freeze in support of the US Antarctic Research Program, replacing the C-141s used in prior years. The initial flight was flown by the USAF 62nd Airlift Wing. The C-17s fly round trip between Christchurch Airport and McMurdo Station around October each year, taking 5 hours to fly each way. In 2006, the C-17 flew its first Antarctic airdrop mission, delivering 70,000 pounds of supplies. Further airdrops occurred during subsequent years.
A C-17 accompanies the President of the United States on domestic and foreign visits, consultations, and meetings. It is used to transport the Presidential Limousine, Marine One, and security detachments. On several occasions, a C-17 has been used to transport the President himself, temporarily gaining the Air Force One call sign while doing so.
Debate arose over follow-on C-17 orders; the USAF requested a production line shutdown while Congress called for further production. In FY2007, the USAF requested $1.6 billion (~$2.19 billion in 2022) in response to "excessive combat use" of the C-17 fleet. In 2008, USAF General Arthur Lichte, Commander of Air Mobility Command, indicated before a House of Representatives subcommittee on air and land forces a need to extend production to another 15 aircraft to increase the total to 205, and that C-17 production may continue to satisfy airlift requirements. The USAF finally decided to cap its C-17 fleet at 223 aircraft; the final delivery was on 12 September 2013.
In 2015, as part of a missile-defense test at Wake Island, simulated medium-range ballistic missiles were launched from C-17s against THAAD missile defense systems and the USS John Paul Jones (DDG-53). In early 2020, palletized munitions, "Combat Expendable Platforms", were tested from C-17s and C-130Js with results the USAF considered positive. In 2021, the Air Force Research Laboratory further developed the concept into the Rapid Dragon system, which transforms the C-17 into a lethal cruise missile arsenal ship capable of mass launching 45 JASSM-ER missiles with 500 kg warheads from a standoff distance of 925 km (575 mi). Future anticipated improvements include support for JDAM-ER, mine laying, and drone dispersal, as well as improved standoff range when full production of the 1,900 km (1,200 mi) JASSM-XR delivers large inventories in 2024.
On 15 August 2021, USAF C-17 02-1109 from the 62nd Airlift Wing and 446th Airlift Wing at Joint Base Lewis-McChord departed Hamid Karzai International Airport in Kabul, Afghanistan, while crowds of people trying to escape the 2021 Taliban offensive ran alongside the aircraft. The C-17 lifted off with people holding on to the outside, and at least two died after falling from the aircraft. An unknown number of people may have been crushed and killed by the retracting landing gear; human remains were found in the landing-gear stowage. Also that day, C-17 01-0186 from the 816th Expeditionary Airlift Squadron at Al Udeid Air Base transported 823 Afghan citizens from Hamid Karzai International Airport on a single flight, setting a new record for the type; the previous record was over 670 people carried during a 2013 typhoon evacuation from Tacloban, Philippines.
Boeing marketed the C-17 to many European nations including Belgium, Germany, France, Italy, Spain and the United Kingdom. The Royal Air Force (RAF) has established an aim of having interoperability and some weapons and capabilities commonality with the USAF. The 1998 Strategic Defence Review identified a requirement for a strategic airlifter. The Short-Term Strategic Airlift competition commenced in September of that year, but the tender was canceled in August 1999 with some bids, including the Boeing/BAe C-17 bid, identified by ministers as too expensive, and others unsuitable. The project continued, with the C-17 seen as the favorite. In the light of Airbus A400M delays, the UK Secretary of State for Defence, Geoff Hoon, announced in May 2000 that the RAF would lease four C-17s at an annual cost of £100 million from Boeing for an initial seven years with an optional two-year extension. The RAF had the option to buy or return the aircraft to Boeing. The UK committed to upgrading its C-17s in line with the USAF so that if they were returned, the USAF could adopt them. The lease agreement restricted the C-17's operational use, meaning that the RAF could not use them for para-drop, airdrop, rough field, low-level operations and air-to-air refueling.
The first C-17 was delivered to the RAF at Boeing's Long Beach facility on 17 May 2001 and flown to RAF Brize Norton by a crew from No. 99 Squadron. The RAF's fourth C-17 was delivered on 24 August 2001. The RAF aircraft were some of the first to take advantage of the new center wing fuel tank found in Block 13 aircraft. In RAF service, the C-17 has not been given an official service name and designation (for example, the C-130J is referred to as Hercules C4 or C5), but is referred to simply as the C-17 or "C-17A Globemaster". Although the C-17 was intended as a fallback for the A400M, the Ministry of Defence (MoD) announced on 21 July 2004 that it had elected to buy the four C-17s at the end of the lease, even though the A400M appeared to be closer to production. The C-17 gives the RAF strategic capabilities that it would not wish to lose, for example a maximum payload of 169,500 pounds (76,900 kg) compared to the A400M's 82,000 pounds (37,000 kg). The C-17's capabilities allow the RAF to use it as an airborne hospital for medical evacuation missions.
Another C-17 was ordered in August 2006, and delivered on 22 February 2008. The four leased C-17s were to be purchased later in 2008. Due to fears that the A400M may suffer further delays, the MoD announced in 2006 that it planned to acquire three more C-17s, for a total of eight, with delivery in 2009–2010. On 3 December 2007, the MoD announced a contract for a sixth C-17, which was received on 11 June 2008. On 18 December 2009, Boeing confirmed that the RAF had ordered a seventh C-17, which was delivered on 16 November 2010. The UK announced the purchase of its eighth C-17 in February 2012. The RAF showed interest in buying a ninth C-17 in November 2013.
On 13 January 2013, the RAF deployed two C-17s from RAF Brize Norton to the French Évreux Air Base, transporting French armored vehicles to the Malian capital of Bamako during the French intervention in Mali. In June 2015, an RAF C-17 was used to medically evacuate four victims of the 2015 Sousse attacks from Tunisia. On 13 September 2022, C-17 ZZ177 carried the body of Queen Elizabeth II from Edinburgh Airport to RAF Northolt in London. She had been lying in state at St Giles' Cathedral in Edinburgh, Scotland.
The Royal Australian Air Force (RAAF) began investigating an acquisition of strategic transport aircraft in 2005. In late 2005, the then Minister for Defence Robert Hill stated that such aircraft were being considered due to the limited availability of strategic airlift aircraft from partner nations and air freight companies. The C-17 was considered to be favored over the A400M as it was a "proven aircraft" and in production. One major RAAF requirement was the ability to airlift the Army's M1 Abrams tanks; another requirement was immediate delivery. Though unstated, commonality with the USAF and the RAF was also considered advantageous. RAAF aircraft were ordered directly from the USAF production run and are identical to American C-17s even in paint scheme, the only difference being the national markings, allowing deliveries to commence within nine months of commitment to the program.
On 2 March 2006, the Australian government announced the purchase of three aircraft and one option with an entry into service date of 2006. In July 2006, Boeing was awarded a fixed price contract to deliver four C-17s for US$780M (A$1bn). Australia also signed a US$80.7M contract to join the global 'virtual fleet' C-17 sustainment program; RAAF C-17s receive the same upgrades as the USAF's fleet.
The RAAF took delivery of its first C-17 in a ceremony at Boeing's plant at Long Beach, California on 28 November 2006. Several days later the aircraft flew from Hickam Air Force Base, Hawaii to Defence Establishment Fairbairn, Canberra, arriving on 4 December 2006. The aircraft was formally accepted in a ceremony at Fairbairn shortly after arrival. The second aircraft was delivered to the RAAF on 11 May 2007 and the third was delivered on 18 December 2007. The fourth Australian C-17 was delivered on 19 January 2008. All the Australian C-17s are operated by No. 36 Squadron and are based at RAAF Base Amberley in Queensland.
On 18 April 2011, Boeing announced that Australia had signed an agreement with the U.S. government to acquire a fifth C-17 due to an increased demand for humanitarian and disaster relief missions. The aircraft was delivered to the RAAF on 14 September 2011. On 23 September 2011, Australian Minister for Defence Materiel Jason Clare announced that the government was seeking information from the U.S. about the price and delivery schedule for a sixth Globemaster. In November 2011, Australia requested a sixth C-17 through the U.S. Foreign Military Sales program; it was ordered in June 2012, and was delivered on 1 November 2012.
In August 2014, Defence Minister David Johnston announced the intention to purchase one or two additional C-17s. On 3 October 2014, Johnston announced the government's approval to buy two C-17s at a total cost of US$770M (A$1bn). The United States Congress approved the sale under the Foreign Military Sales program. Prime Minister Tony Abbott confirmed in April 2015 that two additional aircraft were to be ordered, with both delivered by 4 November 2015; these added to the six C-17s it had as of 2015.
The Canadian Armed Forces had a long-standing need for strategic airlift for military and humanitarian operations around the world. It had followed a pattern similar to the German Air Force in leasing Antonovs and Ilyushins for many requirements, including deploying the Disaster Assistance Response Team to tsunami-stricken Sri Lanka in 2005; the Canadian Forces had relied entirely on leased An-124 Ruslans for a Canadian Army deployment to Haiti in 2003. A combination of leased Ruslans, Ilyushins and USAF C-17s was also used to move heavy equipment to Afghanistan. In 2002, the Canadian Forces Future Strategic Airlifter Project began to study alternatives, including long-term leasing arrangements.
On 5 July 2006, the Canadian government issued a notice of intent to negotiate with Boeing to procure four airlifters for the Canadian Forces Air Command (Royal Canadian Air Force after August 2011). On 1 February 2007, Canada awarded a contract for four C-17s with delivery beginning in August 2007. Like Australia, Canada was granted airframes originally slated for the USAF to accelerate delivery. The official Canadian designation is CC-177 Globemaster III.
On 23 July 2007, the first Canadian C-17 made its initial flight. It was turned over to Canada on 8 August, and participated at the Abbotsford International Airshow on 11 August prior to arriving at its new home base at 8 Wing, CFB Trenton, Ontario on 12 August. Its first operational mission was to deliver disaster relief to Jamaica following Hurricane Dean that month. The last of the initial four aircraft was delivered in April 2008. On 19 December 2014, it was reported that Canada intended to purchase one more C-17. On 30 March 2015, Canada's fifth C-17 arrived at CFB Trenton. The aircraft are assigned to 429 Transport Squadron based at CFB Trenton.
On 14 April 2010, a Canadian C-17 landed for the first time at CFS Alert, the world's most northerly airport. Canadian Globemasters have been deployed in support of numerous missions worldwide, including Operation Hestia after the earthquake in Haiti, providing airlift as part of Operation Mobile and support to the Canadian mission in Afghanistan. After Typhoon Haiyan hit the Philippines in 2013, Canadian C-17s established an air bridge between the two nations, deploying Canada's DART and delivering humanitarian supplies and equipment. In 2014, they supported Operation Reassurance and Operation Impact.
At the 2006 Farnborough Airshow, a number of NATO member nations signed a letter of intent to jointly purchase and operate several C-17s within the Strategic Airlift Capability (SAC). SAC members are Bulgaria, Estonia, Hungary, Lithuania, the Netherlands, Norway, Poland, Romania, Slovenia and the U.S., along with two Partnership for Peace countries, Finland and Sweden, as of 2010. The purchase was for two C-17s, and a third was contributed by the U.S. On 14 July 2009, Boeing delivered the first C-17 under the SAC program. The second and third C-17s were delivered in September and October 2009.
The SAC C-17s are based at Pápa Air Base, Hungary. The Heavy Airlift Wing is hosted by Hungary, which acts as the flag nation. The aircraft are manned in a similar fashion to the NATO E-3 AWACS aircraft. The C-17 flight crews are multi-national, but each mission is assigned to an individual member nation based on the SAC's annual flight hour share agreement. The NATO Airlift Management Programme Office (NAMPO) provides management and support for the Heavy Airlift Wing. NAMPO is a part of the NATO Support Agency (NSPA). In September 2014, Boeing stated that the three C-17s supporting SAC missions had achieved a readiness rate of nearly 94 percent over the last five years and supported over 1,000 missions.
In June 2009, the Indian Air Force (IAF) selected the C-17 for its Very Heavy Lift Transport Aircraft requirement to replace several types of transport aircraft. In January 2010, India requested 10 C-17s through the U.S.'s Foreign Military Sales program; the sale was approved by Congress in June 2010. On 23 June 2010, the IAF successfully test-landed a USAF C-17 at Gaggal Airport, India, to complete the IAF's C-17 trials. In February 2011, the IAF and Boeing agreed terms for the order of 10 C-17s with an option for six more; the US$4.1 billion order was approved by the Indian Cabinet Committee on Security on 6 June 2011. Deliveries began in June 2013 and were to continue to 2014. In 2012, the IAF reportedly finalized plans to buy six more C-17s in its five-year plan for 2017–2022.
The C-17 provides the IAF with strategic airlift, the ability to deploy special forces, and the ability to operate in diverse terrain – from Himalayan air bases in North India at 13,000 ft (4,000 m) to Indian Ocean bases in South India. The C-17s are based at Hindon Air Force Station and are operated by No. 81 Squadron IAF Skylords. The first C-17 was delivered in January 2013 for testing and training; it was officially accepted on 11 June 2013. The second C-17 was delivered on 23 July 2013 and put into service immediately. IAF Chief of Air Staff Norman AK Browne called it "a major component in the IAF's modernization drive" while taking delivery of the aircraft at Boeing's Long Beach factory. On 2 September 2013, the Skylords squadron with three C-17s officially entered IAF service.
The Skylords regularly fly missions within India, such as to high-altitude bases at Leh and Thoise. The IAF first used the C-17 to transport an infantry battalion's equipment to Port Blair in the Andaman Islands on 1 July 2013. Foreign deployments to date include Tajikistan in August 2013 and Rwanda to support Indian peacekeepers. One C-17 was used for transporting relief materials during Cyclone Phailin.
The sixth aircraft was received in July 2014. In June 2017, the U.S. Department of State approved the potential sale of one C-17 to India under a proposed $366 million (~$432 million in 2022) U.S. Foreign Military Sale. This aircraft, the last C-17 produced, increased the IAF's fleet to 11 C-17s. In March 2018, a contract for this aircraft was awarded, with completion due by 22 August 2019.
On 7 February 2023, an IAF C-17 delivered humanitarian aid packages for earthquake victims in Turkey and Syria by taking a detour around Pakistan's airspace in the aftermath of the 2021 Taliban takeover of Afghanistan.
Boeing delivered Qatar's first C-17 on 11 August 2009 and the second on 10 September 2009 for the Qatar Emiri Air Force. Qatar received its third C-17 in 2012, and the fourth was received on 10 December 2012. In June 2013, The New York Times reported that Qatar was allegedly using its C-17s to ship weapons from Libya to the Syrian opposition during the civil war via Turkey. On 15 June 2015, it was announced at the Paris Airshow that Qatar had agreed to order four additional C-17s from the five remaining "white tail" C-17s to double its fleet. One Qatari C-17 bears the civilian markings of government-owned Qatar Airways, although the airplane is owned and operated by the Qatar Emiri Air Force, because some airports are closed to airplanes with military markings.
In February 2009, the United Arab Emirates Air Force agreed to buy four C-17s. In January 2010, a contract was signed for six C-17s. In May 2011, the first C-17 was handed over and the final was received in June 2012.
Kuwait requested the purchase of one C-17 in September 2010 and a second in April 2013 through the U.S.'s Foreign Military Sales (FMS) program. The nation ordered two C-17s; the first was delivered on 13 February 2014.
In 2015, New Zealand's Minister of Defence, Gerry Brownlee, was considering the purchase of two C-17s for the Royal New Zealand Air Force at an estimated cost of $600 million as a heavy air transport option. However, the New Zealand Government eventually decided not to acquire the C-17.
Data from Brassey's World Aircraft & Systems Directory, U.S. Air Force fact sheet, Boeing
General characteristics
Performance
Avionics
Related development
Aircraft of comparable role, configuration, and era
Related lists | [
{
"paragraph_id": 0,
"text": "The McDonnell Douglas/Boeing C-17 Globemaster III is a large military transport aircraft that was developed for the United States Air Force (USAF) from the 1980s to the early 1990s by McDonnell Douglas. The C-17 carries forward the name of two previous piston-engined military cargo aircraft, the Douglas C-74 Globemaster and the Douglas C-124 Globemaster II.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The C-17 is based upon the YC-15, a smaller prototype airlifter designed during the 1970s. It was designed to replace the Lockheed C-141 Starlifter, and also fulfill some of the duties of the Lockheed C-5 Galaxy. Compared to the YC-15, the redesigned airlifter differed in having swept wings, increased size, and more powerful engines. Development was protracted by a series of design issues, causing the company to incur a loss of nearly US$1.5 billion on the program's development phase. On 15 September 1991, roughly one year behind schedule, the first C-17 performed its maiden flight. The C-17 formally entered USAF service on 17 January 1995. Boeing, which merged with McDonnell Douglas in 1997, continued to manufacture the C-17 for almost two decades. The final C-17 was completed at the Long Beach, California, plant and flown on 29 November 2015.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The C-17 commonly performs tactical and strategic airlift missions, transporting troops and cargo throughout the world; additional roles include medical evacuation and airdrop duties. The transport is in service with the USAF along with air arms of India, the United Kingdom, Australia, Canada, Qatar, the United Arab Emirates, Kuwait, and the Europe-based multilateral organization Heavy Airlift Wing. The type played a key logistical role during both Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as in providing humanitarian aid in the aftermath of various natural disasters, including the 2010 Haiti earthquake, the 2011 Sindh floods and the recent 2023 Turkey-Syria earthquake.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In the 1970s, the U.S. Air Force began looking for a replacement for its Lockheed C-130 Hercules tactical cargo aircraft. The Advanced Medium STOL Transport (AMST) competition was held, with Boeing proposing the YC-14, and McDonnell Douglas proposing the YC-15. Though both entrants exceeded specified requirements, the AMST competition was canceled before a winner was selected. The USAF started the C-X program in November 1979 to develop a larger AMST with longer range to augment its strategic airlift.",
"title": "Development"
},
{
"paragraph_id": 4,
"text": "By 1980, the USAF had a large fleet of aging C-141 Starlifter cargo aircraft. Compounding matters, increased strategic airlift capabilities was needed to fulfill its rapid-deployment airlift requirements. The USAF set mission requirements and released a request for proposals (RFP) for C-X in October 1980. McDonnell Douglas chose to develop a new aircraft based on the YC-15. Boeing bid an enlarged three-engine version of its AMST YC-14. Lockheed submitted both a C-5-based design and an enlarged C-141 design. On 28 August 1981, McDonnell Douglas was chosen to build its proposal, then designated C-17. Compared to the YC-15, the new aircraft differed in having swept wings, increased size, and more powerful engines. This would allow it to perform the work done by the C-141, and to fulfill some of the duties of the Lockheed C-5 Galaxy, freeing the C-5 fleet for outsize cargo.",
"title": "Development"
},
{
"paragraph_id": 5,
"text": "Alternative proposals were pursued to fill airlift needs after the C-X contest. These were lengthening of C-141As into C-141Bs, ordering more C-5s, continued purchases of KC-10s, and expansion of the Civil Reserve Air Fleet. Limited budgets reduced program funding, requiring a delay of four years. During this time contracts were awarded for preliminary design work and for the completion of engine certification. In December 1985, a full-scale development contract was awarded, under Program Manager Bob Clepper. At this time, first flight was planned for 1990. The USAF had formed a requirement for 210 aircraft.",
"title": "Development"
},
{
"paragraph_id": 6,
"text": "Development problems and limited funding caused delays in the late 1980s. Criticisms were made of the developing aircraft and questions were raised about more cost-effective alternatives during this time. In April 1990, Secretary of Defense Dick Cheney reduced the order from 210 to 120 aircraft. The maiden flight of the C-17 took place on 15 September 1991 from the McDonnell Douglas's plant in Long Beach, California, about a year behind schedule. The first aircraft (T-1) and five more production models (P1-P5) participated in extensive flight testing and evaluation at Edwards Air Force Base. Two complete airframes were built for static and repeated load testing.",
"title": "Development"
},
{
"paragraph_id": 7,
"text": "A static test of the C-17 wing in October 1992 resulted in its failure at 128% of design limit load, below the 150% requirement. Both wings buckled rear to the front and failures occurred in stringers, spars, and ribs. Some $100 million was spent to redesign the wing structure; the wing failed at 145% during a second test in September 1993. A review of the test data, however, showed that the wing was not loaded correctly and did indeed meet the requirement. The C-17 received the \"Globemaster III\" name in early 1993. In late 1993, the Department of Defense (DoD) gave the contractor two years to solve production issues and cost overruns or face the contract's termination after the delivery of the 40th aircraft. By accepting the 1993 terms, McDonnell Douglas incurred a loss of nearly US$1.5 billion on the program's development phase.",
"title": "Development"
},
{
"paragraph_id": 8,
"text": "In March 1994, the Non-Developmental Airlift Aircraft program was established to procure a transport aircraft using commercial practices as a possible alternative or supplement to the C-17. Initial material solutions considered included: buy a modified Boeing 747-400 NDAA, restart the C-5 production line, extend the C-141 service life, and continue C-17 production. The field eventually narrowed to: the Boeing 747-400, the Lockheed Martin C-5D, and the McDonnell Douglas C-17. The NDAA program was initiated after the C-17 program was temporarily capped at a 40-aircraft buy pending further evaluation of C-17 cost and performance and an assessment of commercial airlift alternatives.",
"title": "Development"
},
{
"paragraph_id": 9,
"text": "In April 1994, the program remained over budget and did not meet weight, fuel burn, payload, and range specifications. It failed several key criteria during airworthiness evaluation tests. Problems were found with the mission software, landing gear, and other areas. In May 1994, it was proposed to cut production to as few as 32 aircraft; these cuts were later rescinded. A July 1994 Government Accountability Office (GAO) report revealed that USAF and DoD studies from 1986 and 1991 stated the C-17 could use 6,400 more runways outside the U.S. than the C-5, but these studies had only considered runway dimensions, but not runway strength or load classification numbers (LCN). The C-5 has a lower LCN, but the USAF classifies both in the same broad load classification group. When considering runway dimensions and load ratings, the C-17's worldwide runway advantage over the C-5 shrank from 6,400 to 911 airfields. The report also stated \"current military doctrine that does not reflect the use of small, austere airfields\", thus the C-17's short field capability was not considered.",
"title": "Development"
},
{
"paragraph_id": 10,
"text": "A January 1995 GAO report stated that the USAF originally planned to order 210 C-17s at a cost of $41.8 billion, and that the 120 aircraft on order were to cost $39.5 billion based on a 1992 estimate. In March 1994, the U.S. Army decided it did not need the 60,000 lb (27,000 kg) low-altitude parachute-extraction system delivery with the C-17 and that the C-130's 42,000 lb (19,000 kg) capability was sufficient. C-17 testing was limited to this lower weight. Airflow issues prevented the C-17 from meeting airdrop requirements. A February 1997 GAO report revealed that a C-17 with a full payload could not land on 3,000 ft (914 m) wet runways; simulations suggested a distance of 5,000 ft (1,500 m) was required. The YC-15 was transferred to AMARC to be made flightworthy again for further flight tests for the C-17 program in March 1997.",
"title": "Development"
},
{
"paragraph_id": 11,
"text": "By September 1995, most of the prior issues were reportedly resolved and the C-17 was meeting all performance and reliability targets. The first USAF squadron was declared operational in January 1995.",
"title": "Development"
},
{
"paragraph_id": 12,
"text": "In 1996, the DoD ordered another 80 aircraft for a total of 120. In 1997, McDonnell Douglas merged with domestic competitor Boeing. In April 1999, Boeing offered to cut the C-17's unit price if the USAF bought 60 more; in August 2002, the order was increased to 180 aircraft. In 2007, 190 C-17s were on order for the USAF. On 6 February 2009, Boeing was awarded a $2.95 billion contract for 15 additional C-17s, increasing the total USAF fleet to 205 and extending production from August 2009 to August 2010. On 6 April 2009, U.S. Secretary of Defense Robert Gates stated that there would be no more C-17s ordered beyond the 205 planned. However, on 12 June 2009, the House Armed Services Air and Land Forces Subcommittee added a further 17 C-17s.",
"title": "Development"
},
{
"paragraph_id": 13,
"text": "In 2010, Boeing reduced the production rate to 10 aircraft per year from a high of 16 per year, due to dwindling orders and to extend the production line's life while additional orders were sought. The workforce was reduced by about 1,100 through 2012, a second shift at the Long Beach plant was also eliminated. By April 2011, 230 production C-17s had been delivered, including 210 to the USAF. The C-17 prototype \"T-1\" was retired in 2012 after use as a testbed by the USAF. In January 2010, the USAF announced the end of Boeing's performance-based logistics contracts to maintain the type. On 19 June 2012, the USAF ordered its 224th and final C-17 to replace one that crashed in Alaska in July 2010.",
"title": "Development"
},
{
"paragraph_id": 14,
"text": "In September 2013, Boeing announced that C-17 production was starting to close down. In October 2014, the main wing spar of the 279th and last aircraft was completed; this C-17 was delivered in 2015, after which Boeing closed the Long Beach plant. Production of spare components was to continue until at least 2017. The C-17 is projected to be in service for several decades. In February 2014, Boeing was engaged in sales talks with \"five or six\" countries for the remaining 15 C-17s; thus Boeing decided to build ten aircraft without confirmed buyers in anticipation of future purchases.",
"title": "Development"
},
{
"paragraph_id": 15,
"text": "In May 2015, The Wall Street Journal reported that Boeing expected to book a charge of under $100 million and cut 3,000 positions associated with the C-17 program, and also suggested that Airbus' lower cost A400M Atlas took international sales away from the C-17.",
"title": "Development"
},
{
"paragraph_id": 16,
"text": "The C-17 Globemaster III is a strategic transport aircraft, able to airlift cargo close to a battle area. The size and weight of U.S. mechanized firepower and equipment have grown in recent decades from increased air mobility requirements, particularly for large or heavy non-palletized outsize cargo. It has a length of 174 feet (53 m) and a wingspan of 169 feet 10 inches (51.77 m), and uses about 8% composite materials, mostly in secondary structure and control surfaces.",
"title": "Design"
},
{
"paragraph_id": 17,
"text": "The C-17 is powered by four Pratt & Whitney F117-PW-100 turbofan engines, which are based on the commercial Pratt & Whitney PW2040 used on the Boeing 757. Each engine is rated at 40,400 lbf (180 kN) of thrust. The engine's thrust reversers direct engine exhaust air upwards and forward, reducing the chances of foreign object damage by ingestion of runway debris, and providing enough reverse thrust to back up the aircraft while taxiing. The thrust reversers can also be used in flight at idle-reverse for added drag in maximum-rate descents. In vortex surfing tests performed by two C-17s, up to 10% fuel savings were reported.",
"title": "Design"
},
{
"paragraph_id": 18,
"text": "For cargo operations the C-17 requires a crew of three: pilot, copilot, and loadmaster. The cargo compartment is 88 feet (27 m) long by 18 feet (5.5 m) wide by 12 feet 4 inches (3.76 m) high. The cargo floor has rollers for palletized cargo but it can be flipped to provide a flat floor suitable for vehicles and other rolling stock. Cargo is loaded through a large aft ramp that accommodates rolling stock, such as a 69-ton (63-metric ton) M1 Abrams main battle tank, other armored vehicles, trucks, and trailers, along with palletized cargo.",
"title": "Design"
},
{
"paragraph_id": 19,
"text": "Maximum payload of the C-17 is 170,900 pounds (77,500 kg; 85.5 short tons), and its maximum takeoff weight is 585,000 pounds (265,000 kg). With a payload of 160,000 pounds (73,000 kg) and an initial cruise altitude of 28,000 ft (8,500 m), the C-17 has an unrefueled range of about 2,400 nautical miles (4,400 kilometres) on the first 71 aircraft, and 2,800 nautical miles (5,200 kilometres) on all subsequent extended-range models that include a sealed center wing bay as a fuel tank. Boeing informally calls these aircraft the C-17 ER. The C-17's cruise speed is about 450 knots (830 km/h) (Mach 0.74). It is designed to airdrop 102 paratroopers and their equipment. According to Boeing the maximum unloaded range is 6,230 nautical miles (10,026 Kilometers).",
"title": "Design"
},
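As a quick cross-check of the performance figures above, the quoted cruise speed and unrefueled range convert consistently between units. A minimal sketch in Python (input values are taken from the paragraph; the factor 1.852 km per nautical mile is exact by definition, and the endurance line is a rough still-air estimate for illustration, not a manufacturer figure):

```python
NM_TO_KM = 1.852        # kilometres per nautical mile (exact, by definition)

cruise_speed_kt = 450   # knots, i.e. nautical miles per hour
range_basic_nm = 2400   # unrefueled range of the first 71 aircraft, in nautical miles

print(f"cruise speed ~ {cruise_speed_kt * NM_TO_KM:.0f} km/h")               # ~833, matching 'about 830 km/h'
print(f"basic range  ~ {range_basic_nm * NM_TO_KM:.0f} km")                  # ~4445, matching 'about 4,400 km'
print(f"endurance    ~ {range_basic_nm / cruise_speed_kt:.1f} h at cruise")  # ~5.3 hours still-air
```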
{
"paragraph_id": 20,
"text": "The C-17 is designed to operate from runways as short as 3,500 ft (1,067 m) and as narrow as 90 ft (27 m). The C-17 can also operate from unpaved, unimproved runways (although with a higher probability to damage the aircraft). The thrust reversers can be used to move the aircraft backwards and reverse direction on narrow taxiways using a three- (or more) point turn. The plane is designed for 20 man-hours of maintenance per flight hour, and a 74% mission availability rate.",
"title": "Design"
},
{
"paragraph_id": 21,
"text": "The first production C-17 was delivered to Charleston Air Force Base, South Carolina, on 14 July 1993. The first C-17 unit, the 17th Airlift Squadron, became operationally ready on 17 January 1995. It has broken 22 records for oversized payloads. The C-17 was awarded U.S. aviation's most prestigious award, the Collier Trophy, in 1994. A Congressional report on operations in Kosovo and Operation Allied Force noted \"One of the great success stories...was the performance of the Air Force's C-17A\" It flew half of the strategic airlift missions in the operation, the type could use small airfields, easing operations; rapid turnaround times also led to efficient utilization.",
"title": "Operational history"
},
{
"paragraph_id": 22,
"text": "In 2006, eight C-17s were delivered to March Joint Air Reserve Base, California; controlled by the Air Force Reserve Command (AFRC), assigned to the 452nd Air Mobility Wing and subsequently assigned to AMC's 436th Airlift Wing and its AFRC \"associate\" unit, the 512th Airlift Wing, at Dover Air Force Base, Delaware, supplementing the Lockheed C-5 Galaxy. The Mississippi Air National Guard's 172 Airlift Group received their first of eight C-17s in 2006. In 2011, the New York Air National Guard's 105th Airlift Wing at Stewart Air National Guard Base transitioned from the C-5 to the C-17.",
"title": "Operational history"
},
{
"paragraph_id": 23,
"text": "C-17s delivered military supplies during Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq as well as humanitarian aid in the aftermath of the 2010 Haiti earthquake, and the 2011 Sindh floods, delivering thousands of food rations, tons of medical and emergency supplies. On 26 March 2003, 15 USAF C-17s participated in the biggest combat airdrop since the United States invasion of Panama in December 1989: the night-time airdrop of 1,000 paratroopers from the 173rd Airborne Brigade occurred over Bashur, Iraq. These airdrops were followed by C-17s ferrying M1 Abrams, M2 Bradleys, M113s and artillery. USAF C-17s have also assisted allies in their airlift needs, such as Canadian vehicles to Afghanistan in 2003 and Australian forces for the Australian-led military deployment to East Timor in 2006. In 2006, USAF C-17s flew 15 Canadian Leopard C2 tanks from Kyrgyzstan into Kandahar in support of NATO's Afghanistan mission. In 2013, five USAF C-17s supported French operations in Mali, operating with other nations' C-17s (RAF, NATO and RCAF deployed a single C-17 each).",
"title": "Operational history"
},
{
"paragraph_id": 24,
"text": "Since 1999, C-17s have flown annually to Antarctica on Operation Deep Freeze in support of the US Antarctic Research Program, replacing the C-141s used in prior years. The initial flight was flown by the USAF 62nd Airlift Wing. The C-17s fly round trip between Christchurch Airport and McMurdo Station around October each year and take 5 hours to fly each way. In 2006, the C-17 flew its first Antarctic airdrop mission, delivering 70,000 pounds of supplies. Further air drops occurred during subsequent years.",
"title": "Operational history"
},
{
"paragraph_id": 25,
"text": "A C-17 accompanies the President of the United States on his visits to both domestic and foreign arrangements, consultations, and meetings. It is used to transport the Presidential Limousine, Marine One, and security detachments. On several occasions, a C-17 has been used to transport the President himself, temporarily gaining the Air Force One call sign while doing so.",
"title": "Operational history"
},
{
"paragraph_id": 26,
"text": "Debate arose over follow-on C-17 orders, the USAF requested line shutdown while Congress called for further production. In FY2007, the USAF requested $1.6 billion (~$2.19 billion in 2022) in response to \"excessive combat use\" on the C-17 fleet. In 2008, USAF General Arthur Lichte, Commander of Air Mobility Command, indicated before a House of Representatives subcommittee on air and land forces a need to extend production to another 15 aircraft to increase the total to 205, and that C-17 production may continue to satisfy airlift requirements. The USAF finally decided to cap its C-17 fleet at 223 aircraft; the final delivery was on 12 September 2013.",
"title": "Operational history"
},
{
"paragraph_id": 27,
"text": "In 2015, as part of a missile-defense test at Wake Island, simulated medium-range ballistic missiles were launched from C-17s against THAAD missile defense systems and the USS John Paul Jones (DDG-53). In early 2020, palletized munitions–\"Combat Expendable Platforms\"– were tested from C-17s and C-130Js with results the USAF considered positive. In 2021, the Air Force Research Laboratory further developed the concept into the Rapid Dragon system which transforms the C-17 into a lethal cruise missile arsenal ship capable of mass launching 45 JASSM-ER with 500 kg warheads from a standoff distance of 925 km (575 mi). Future anticipated improvements includes support for JDAM-ER, mine laying, drone dispersal as well as improved standoff range when full production of the 1,900 km (1,200 mi) JASSM-XR delivers large inventories in 2024.",
"title": "Operational history"
},
{
"paragraph_id": 28,
"text": "On 15 August 2021, USAF C-17 02-1109 from the 62nd Airlift Wing and 446th Airlift Wing at Joint Base Lewis-McChord departed Hamid Karzai International Airport in Kabul, Afghanistan, while crowds of people trying to escape the 2021 Taliban offensive ran alongside the aircraft. The C-17 lifted off with people holding on to the outside, and at least two died after falling from the aircraft. There were an unknown number possibly crushed and killed by the landing gear retracting, with human remains found in the landing-gear stowage. Also that day, C-17 01-0186 from the 816th Expeditionary Airlift Squadron at Al Udeid Air Base transported 823 Afghan citizens from Hamid Karzai International Airport on a single flight, setting a new record for the type which was previously over 670 people during a 2013 typhoon evacuation from Tacloban, Philippines.",
"title": "Operational history"
},
{
"paragraph_id": 29,
"text": "Boeing marketed the C-17 to many European nations including Belgium, Germany, France, Italy, Spain and the United Kingdom. The Royal Air Force (RAF) has established an aim of having interoperability and some weapons and capabilities commonality with the USAF. The 1998 Strategic Defence Review identified a requirement for a strategic airlifter. The Short-Term Strategic Airlift competition commenced in September of that year, but the tender was canceled in August 1999 with some bids identified by ministers as too expensive, including the Boeing/BAe C-17 bid, and others unsuitable. The project continued, with the C-17 seen as the favorite. In the light of Airbus A400M delays, the UK Secretary of State for Defence, Geoff Hoon, announced in May 2000 that the RAF would lease four C-17s at an annual cost of £100 million from Boeing for an initial seven years with an optional two-year extension. The RAF had the option to buy or return the aircraft to Boeing. The UK committed to upgrading its C-17s in line with the USAF so that if they were returned, the USAF could adopt them. The lease agreement restricted the C-17's operational use, meaning that the RAF could not use them for para-drop, airdrop, rough field, low-level operations and air to air refueling.",
"title": "Operational history"
},
{
"paragraph_id": 30,
"text": "The first C-17 was delivered to the RAF at Boeing's Long Beach facility on 17 May 2001 and flown to RAF Brize Norton by a crew from No. 99 Squadron. The RAF's fourth C-17 was delivered on 24 August 2001. The RAF aircraft were some of the first to take advantage of the new center wing fuel tank found in Block 13 aircraft. In RAF service, the C-17 has not been given an official service name and designation (for example, C-130J referred to as Hercules C4 or C5), but is referred to simply as the C-17 or \"C-17A Globemaster\". Although it was to be a fallback for the A400M, the Ministry of Defence (MoD) announced on 21 July 2004 that they had elected to buy their four C-17s at the end of the lease, though the A400M appeared to be closer to production. The C-17 gives the RAF strategic capabilities that it would not wish to lose, for example a maximum payload of 169,500 pounds (76,900 kg) compared to the A400M's 82,000 pounds (37,000 kg). The C-17's capabilities allow the RAF to use it as an airborne hospital for medical evacuation missions.",
"title": "Operational history"
},
{
"paragraph_id": 31,
"text": "Another C-17 was ordered in August 2006, and delivered on 22 February 2008. The four leased C-17s were to be purchased later in 2008. Due to fears that the A400M may suffer further delays, the MoD announced in 2006 that it planned to acquire three more C-17s, for a total of eight, with delivery in 2009–2010. On 3 December 2007, the MoD announced a contract for a sixth C-17, which was received on 11 June 2008. On 18 December 2009, Boeing confirmed that the RAF had ordered a seventh C-17, which was delivered on 16 November 2010. The UK announced the purchase of its eighth C-17 in February 2012. The RAF showed interest in buying a ninth C-17 in November 2013.",
"title": "Operational history"
},
{
"paragraph_id": 32,
"text": "On 13 January 2013, the RAF deployed two C-17s from RAF Brize Norton to the French Évreux Air Base, transporting French armored vehicles to the Malian capital of Bamako during the French intervention in Mali. In June 2015, an RAF C-17 was used to medically evacuate four victims of the 2015 Sousse attacks from Tunisia. On 13 September 2022, C-17 ZZ177 carried the body of Queen Elizabeth II from Edinburgh Airport to RAF Northolt in London. She had been lying in state at St Giles' Cathedral in Edinburgh, Scotland.",
"title": "Operational history"
},
{
"paragraph_id": 33,
"text": "The Royal Australian Air Force (RAAF) began investigating an acquisition of strategic transport aircraft in 2005. In late 2005, the then Minister for Defence Robert Hill stated that such aircraft were being considered due to the limited availability of strategic airlift aircraft from partner nations and air freight companies. The C-17 was considered to be favored over the A400M as it was a \"proven aircraft\" and in production. One major RAAF requirement was the ability to airlift the Army's M1 Abrams tanks; another requirement was immediate delivery. Though unstated, commonality with the USAF and the RAF was also considered advantageous. RAAF aircraft were ordered directly from the USAF production run and are identical to American C-17s even in paint scheme, the only difference being the national markings, allowing deliveries to commence within nine months of commitment to the program.",
"title": "Operational history"
},
{
"paragraph_id": 34,
"text": "On 2 March 2006, the Australian government announced the purchase of three aircraft and one option with an entry into service date of 2006. In July 2006, Boeing was awarded a fixed price contract to deliver four C-17s for US$780M (A$1bn). Australia also signed a US$80.7M contract to join the global 'virtual fleet' C-17 sustainment program; RAAF C-17s receive the same upgrades as the USAF's fleet.",
"title": "Operational history"
},
{
"paragraph_id": 35,
"text": "The RAAF took delivery of its first C-17 in a ceremony at Boeing's plant at Long Beach, California on 28 November 2006. Several days later the aircraft flew from Hickam Air Force Base, Hawaii to Defence Establishment Fairbairn, Canberra, arriving on 4 December 2006. The aircraft was formally accepted in a ceremony at Fairbairn shortly after arrival. The second aircraft was delivered to the RAAF on 11 May 2007 and the third was delivered on 18 December 2007. The fourth Australian C-17 was delivered on 19 January 2008. All the Australian C-17s are operated by No. 36 Squadron and are based at RAAF Base Amberley in Queensland.",
"title": "Operational history"
},
{
"paragraph_id": 36,
"text": "On 18 April 2011, Boeing announced that Australia had signed an agreement with the U.S. government to acquire a fifth C-17 due to an increased demand for humanitarian and disaster relief missions. The aircraft was delivered to the RAAF on 14 September 2011. On 23 September 2011, Australian Minister for Defence Materiel Jason Clare announced that the government was seeking information from the U.S. about the price and delivery schedule for a sixth Globemaster. In November 2011, Australia requested a sixth C-17 through the U.S. Foreign Military Sales program; it was ordered in June 2012, and was delivered on 1 November 2012.",
"title": "Operational history"
},
{
"paragraph_id": 37,
"text": "In August 2014, Defence Minister David Johnston announced the intention to purchase one or two additional C-17s. On 3 October 2014, Johnston announced the government's approval to buy two C-17s at a total cost of US$770M (A$1bn). The United States Congress approved the sale under the Foreign Military Sales program. Prime Minister Tony Abbott confirmed in April 2015 that two additional aircraft were to be ordered, with both delivered by 4 November 2015; these added to the six C-17s it had as of 2015.",
"title": "Operational history"
},
{
"paragraph_id": 38,
"text": "The Canadian Armed Forces had a long-standing need for strategic airlift for military and humanitarian operations around the world. It had followed a pattern similar to the German Air Force in leasing Antonovs and Ilyushins for many requirements, including deploying the Disaster Assistance Response Team to tsunami-stricken Sri Lanka in 2005; the Canadian Forces had relied entirely on leased An-124 Ruslan for a Canadian Army deployment to Haiti in 2003. A combination of leased Ruslans, Ilyushins and USAF C-17s was also used to move heavy equipment to Afghanistan. In 2002, the Canadian Forces Future Strategic Airlifter Project began to study alternatives, including long-term leasing arrangements.",
"title": "Operational history"
},
{
"paragraph_id": 39,
"text": "On 5 July 2006, the Canadian government issued a notice of intent to negotiate with Boeing to procure four airlifters for the Canadian Forces Air Command (Royal Canadian Air Force after August 2011). On 1 February 2007, Canada awarded a contract for four C-17s with delivery beginning in August 2007. Like Australia, Canada was granted airframes originally slated for the USAF to accelerate delivery. The official Canadian designation is CC-177 Globemaster III.",
"title": "Operational history"
},
{
"paragraph_id": 40,
"text": "On 23 July 2007, the first Canadian C-17 made its initial flight. It was turned over to Canada on 8 August, and participated at the Abbotsford International Airshow on 11 August prior to arriving at its new home base at 8 Wing, CFB Trenton, Ontario on 12 August. Its first operational mission was to deliver disaster relief to Jamaica following Hurricane Dean that month. The last of the initial four aircraft was delivered in April 2008. On 19 December 2014, it was reported that Canada intended to purchase one more C-17. On 30 March 2015, Canada's fifth C-17 arrived at CFB Trenton. The aircraft are assigned to 429 Transport Squadron based at CFB Trenton.",
"title": "Operational history"
},
{
"paragraph_id": 41,
"text": "On 14 April 2010, a Canadian C-17 landed for the first time at CFS Alert, the world's most northerly airport. Canadian Globemasters have been deployed in support of numerous missions worldwide, including Operation Hestia after the earthquake in Haiti, providing airlift as part of Operation Mobile and support to the Canadian mission in Afghanistan. After Typhoon Haiyan hit the Philippines in 2013, Canadian C-17s established an air bridge between the two nations, deploying Canada's DART and delivering humanitarian supplies and equipment. In 2014, they supported Operation Reassurance and Operation Impact.",
"title": "Operational history"
},
{
"paragraph_id": 42,
"text": "At the 2006 Farnborough Airshow, a number of NATO member nations signed a letter of intent to jointly purchase and operate several C-17s within the Strategic Airlift Capability (SAC). SAC members are Bulgaria, Estonia, Hungary, Lithuania, the Netherlands, Norway, Poland, Romania, Slovenia, the U.S., along with two Partnership for Peace countries Finland and Sweden as of 2010. The purchase was for two C-17s, and a third was contributed by the U.S. On 14 July 2009, Boeing delivered the first C-17 under the SAC program. The second and third C-17s were delivered in September and October 2009.",
"title": "Operational history"
},
{
"paragraph_id": 43,
"text": "The SAC C-17s are based at Pápa Air Base, Hungary. The Heavy Airlift Wing is hosted by Hungary, which acts as the flag nation. The aircraft are manned in similar fashion as the NATO E-3 AWACS aircraft. The C-17 flight crew are multi-national, but each mission is assigned to an individual member nation based on the SAC's annual flight hour share agreement. The NATO Airlift Management Programme Office (NAMPO) provides management and support for the Heavy Airlift Wing. NAMPO is a part of the NATO Support Agency (NSPA). In September 2014, Boeing stated that the three C-17s supporting SAC missions had achieved a readiness rate of nearly 94 percent over the last five years and supported over 1,000 missions.",
"title": "Operational history"
},
{
"paragraph_id": 44,
"text": "In June 2009, the Indian Air Force (IAF) selected the C-17 for its Very Heavy Lift Transport Aircraft requirement to replace several types of transport aircraft. In January 2010, India requested 10 C-17s through the U.S.'s Foreign Military Sales program, the sale was approved by Congress in June 2010. On 23 June 2010, the IAF successfully test-landed a USAF C-17 at the Gaggal Airport, India to complete the IAF's C-17 trials. In February 2011, the IAF and Boeing agreed terms for the order of 10 C-17s with an option for six more; the US$4.1 billion order was approved by the Indian Cabinet Committee on Security on 6 June 2011. Deliveries began in June 2013 and were to continue to 2014. In 2012, the IAF reportedly finalized plans to buy six more C-17s in its five-year plan for 2017–2022.",
"title": "Operational history"
},
{
"paragraph_id": 45,
"text": "It provides strategic airlift, the ability to deploy special forces, and to operate in diverse terrain – from Himalayan air bases in North India at 13,000 ft (4,000 m) to Indian Ocean bases in South India. The C-17s are based at Hindon Air Force Station and are operated by No. 81 Squadron IAF Skylords. The first C-17 was delivered in January 2013 for testing and training; it was officially accepted on 11 June 2013. The second C-17 was delivered on 23 July 2013 and put into service immediately. IAF Chief of Air Staff Norman AK Browne called it \"a major component in the IAF's modernization drive\" while taking delivery of the aircraft at Boeing's Long Beach factory. On 2 September 2013, the Skylords squadron with three C-17s officially entered IAF service.",
"title": "Operational history"
},
{
"paragraph_id": 46,
"text": "The Skylords regularly fly missions within India, such as to high-altitude bases at Leh and Thoise. The IAF first used the C-17 to transport an infantry battalion's equipment to Port Blair on Andaman Islands on 1 July 2013. Foreign deployments to date include Tajikistan in August 2013, and Rwanda to support Indian peacekeepers. One C-17 was used for transporting relief materials during Cyclone Phailin.",
"title": "Operational history"
},
{
"paragraph_id": 47,
"text": "The sixth aircraft was received in July 2014. In June 2017, the U.S. Department of State approved the potential sale of one C-17 to India under a proposed $366 million (~$432 million in 2022) U.S. Foreign Military Sale. This aircraft, the last C-17 produced, increased the IAF's fleet to 11 C-17s. In March 2018, a contract was awarded for completion by 22 August 2019.",
"title": "Operational history"
},
{
"paragraph_id": 48,
"text": "On 7 February 2023, an IAF C-17 delivered humanitarian aid packages for earthquake victims in Turkey and Syria by taking a detour around Pakistan's airspace in the aftermath of the 2021 Taliban takeover of Afghanistan.",
"title": "Operational history"
},
{
"paragraph_id": 49,
"text": "Boeing delivered Qatar's first C-17 on 11 August 2009 and the second on 10 September 2009 for the Qatar Emiri Air Force. Qatar received its third C-17 in 2012, and fourth C-17 was received on 10 December 2012. In June 2013, The New York Times reported that Qatar was allegedly using its C-17s to ship weapons from Libya to the Syrian opposition during the civil war via Turkey. On 15 June 2015, it was announced at the Paris Airshow that Qatar agreed to order four additional C-17s from the five remaining \"white tail\" C-17s to double Qatar's C-17 fleet. One Qatari C-17 bears the civilian markings of government-owned Qatar Airways, although the airplane is owned and operated by the Qatar Emiri Air Force. This is because some airports are closed to airplanes with military markings.",
"title": "Operational history"
},
{
"paragraph_id": 50,
"text": "In February 2009, the United Arab Emirates Air Force agreed to buy four C-17s. In January 2010, a contract was signed for six C-17s. In May 2011, the first C-17 was handed over and the final was received in June 2012.",
"title": "Operational history"
},
{
"paragraph_id": 51,
"text": "Kuwait requested the purchase of one C-17 in September 2010 and a second in April 2013 through the U.S.'s Foreign Military Sales (FMS) program. The nation ordered two C-17s; the first was delivered on 13 February 2014.",
"title": "Operational history"
},
{
"paragraph_id": 52,
"text": "In 2015, New Zealand's Minister of Defence, Gerry Brownlee, was considering the purchase of two C-17s for the Royal New Zealand Air Force at an estimated cost of $600 million as a heavy air transport option. However, the New Zealand Government eventually decided not to acquire the C-17.",
"title": "Operational history"
},
{
"paragraph_id": 53,
"text": "Data from Brassey's World Aircraft & Systems Directory, U.S. Air Force fact sheet, Boeing",
"title": "Specifications (C-17A)"
},
{
"paragraph_id": 54,
"text": "General characteristics",
"title": "Specifications (C-17A)"
},
{
"paragraph_id": 55,
"text": "Performance",
"title": "Specifications (C-17A)"
},
{
"paragraph_id": 56,
"text": "Avionics",
"title": "Specifications (C-17A)"
},
{
"paragraph_id": 57,
"text": "Related development",
"title": "See also"
},
{
"paragraph_id": 58,
"text": "Aircraft of comparable role, configuration, and era",
"title": "See also"
},
{
"paragraph_id": 59,
"text": "Related lists",
"title": "See also"
}
] | The McDonnell Douglas/Boeing C-17 Globemaster III is a large military transport aircraft that was developed for the United States Air Force (USAF) from the 1980s to the early 1990s by McDonnell Douglas. The C-17 carries forward the name of two previous piston-engined military cargo aircraft, the Douglas C-74 Globemaster and the Douglas C-124 Globemaster II. The C-17 is based upon the YC-15, a smaller prototype airlifter designed during the 1970s. It was designed to replace the Lockheed C-141 Starlifter, and also fulfill some of the duties of the Lockheed C-5 Galaxy. Compared to the YC-15, the redesigned airlifter differed in having swept wings, increased size, and more powerful engines. Development was protracted by a series of design issues, causing the company to incur a loss of nearly US$1.5 billion on the program's development phase. On 15 September 1991, roughly one year behind schedule, the first C-17 performed its maiden flight. The C-17 formally entered USAF service on 17 January 1995. Boeing, which merged with McDonnell Douglas in 1997, continued to manufacture the C-17 for almost two decades. The final C-17 was completed at the Long Beach, California, plant and flown on 29 November 2015. The C-17 commonly performs tactical and strategic airlift missions, transporting troops and cargo throughout the world; additional roles include medical evacuation and airdrop duties. The transport is in service with the USAF along with air arms of India, the United Kingdom, Australia, Canada, Qatar, the United Arab Emirates, Kuwait, and the Europe-based multilateral organization Heavy Airlift Wing. The type played a key logistical role during both Operation Enduring Freedom in Afghanistan and Operation Iraqi Freedom in Iraq, as well as in providing humanitarian aid in the aftermath of various natural disasters, including the 2010 Haiti earthquake, the 2011 Sindh floods and the recent 2023 Turkey-Syria earthquake. | 2002-02-25T15:51:15Z | 2023-12-21T17:29:05Z | [
"Template:Short description",
"Template:Listen",
"Template:AUS",
"Template:Webarchive",
"Template:Cite web",
"Template:Infobox aircraft type",
"Template:Inflation/year",
"Template:Cvt",
"Template:UAE",
"Template:Aircontent",
"Template:Rs",
"Template:Refbegin",
"Template:Official website",
"Template:Main",
"Template:USA",
"Template:Anchor",
"Template:Portal",
"Template:Cite news",
"Template:Refend",
"Template:McDD aircraft",
"Template:Redirect",
"Template:Commons category",
"Template:CF aircraft",
"Template:ADF aircraft designations",
"Template:Convert",
"Template:As of",
"Template:IND",
"Template:KWT",
"Template:Dead link",
"Template:Citation",
"Template:US transport aircraft",
"Template:USD",
"Template:Aircraft specs",
"Template:Reflist",
"Template:Cbignore",
"Template:Cite magazine",
"Template:Use dmy dates",
"Template:Infobox aircraft begin",
"Template:Format price",
"Template:CAN",
"Template:QAT",
"Template:UK",
"Template:Cite press release",
"Template:ISBN",
"Template:Cn",
"Template:AUD",
"Template:Flagicon",
"Template:Multiple image",
"Template:Boeing support aircraft",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Boeing_C-17_Globemaster_III |
6,732 | Caber | Caber can refer to: | [
{
"paragraph_id": 0,
"text": "Caber can refer to:",
"title": ""
}
] | Caber can refer to: Caber toss, a sport | 2021-04-19T17:31:31Z | [
"Template:Wikt",
"Template:In title",
"Template:Dis"
] | https://en.wikipedia.org/wiki/Caber |
|
6,734 | Garbage collection (computer science) | In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced; such memory is called garbage. Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp.
Garbage collection relieves the programmer from doing manual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so. Other, similar techniques include stack allocation, region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affect performance as a result.
Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods (e.g. destructors). Some such methods de-allocate memory also.
Many programming languages require garbage collection, either as part of the language specification (e.g., RPL, Java, C#, D, Go, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus). These are said to be garbage-collected languages. Other languages, such as C and C++, were designed for use with manual memory management, but have garbage-collected implementations available. Some languages, like Ada, Modula-3, and C++/CLI, allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects. Still others, like D, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required.
Although many languages integrate GC into their compiler and runtime system, post-hoc GC systems also exist, such as Automatic Reference Counting (ARC). Some of these post-hoc GC systems do not require recompilation. Post-hoc GC is sometimes called litter collection, to distinguish it from ordinary GC.
GC frees the programmer from manually de-allocating memory. This helps avoid some kinds of errors:
GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetime manually in the source code is overhead, which can impair program performance. A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealised explicit memory management. The comparison however is made to a program generated by inserting deallocation calls using an oracle, implemented by collecting traces from programs run under a profiler, and the program is only correct for one particular execution of the program. Interaction with memory hierarchy effects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection in iOS, despite it being the most desired feature.
The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift/free memory) scattered throughout a session. Unpredictable stalls can be unacceptable in real-time environments, in transaction processing, or in interactive programs. Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs.
Tracing garbage collection is the most common type of garbage collection, so much so that "garbage collection" often refers to tracing garbage collection, rather than other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are reachable by a chain of references from certain root objects, and considering the rest as garbage and collecting them. However, there are a large number of algorithms used in implementation, with widely varying complexity and performance characteristics.
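The reachability idea is small enough to sketch directly. Below is a toy mark-and-sweep pass in Python, with the heap modeled as a dictionary from object names to the names they reference; every identifier here is illustrative rather than any real collector's API:

```python
heap = {
    "a": ["b"],   # a references b
    "b": [],
    "c": ["d"],   # c and d form a cycle that nothing else references
    "d": ["c"],
}
roots = ["a"]     # objects the running program can reach directly

def mark(roots, heap):
    """Return the set of objects reachable from the roots."""
    live, stack = set(), list(roots)
    while stack:
        obj = stack.pop()
        if obj not in live:
            live.add(obj)
            stack.extend(heap[obj])   # follow outgoing references
    return live

def sweep(heap, live):
    """Reclaim every object that was not marked live."""
    for obj in list(heap):
        if obj not in live:
            del heap[obj]

sweep(heap, mark(roots, heap))
print(sorted(heap))   # ['a', 'b'] -- the unreachable cycle c<->d was collected
```

Note that the cycle between c and d is collected even though each still references the other, which is the key advantage tracing holds over plain reference counting.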
Reference counting garbage collection is where each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created, and decremented when a reference is destroyed. When the count reaches zero, the object's memory is reclaimed.
As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and usually only accesses memory which is either in CPU caches, in objects to be freed, or directly pointed to by those, and thus tends to not have significant negative side effects on CPU cache and virtual memory operation.
There are a number of disadvantages to reference counting; these can generally be solved or mitigated by more sophisticated algorithms:
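As one concrete example, CPython pairs reference counting with a supplemental tracing cycle detector. A short demonstration using only the standard library (the exact counts assume CPython, whose sys.getrefcount includes the temporary reference created by the call itself):

```python
import gc
import sys

a = []
print(sys.getrefcount(a))   # 2: the name `a` plus the temporary reference held by the call

b = [a]                     # storing the list elsewhere adds a reference
print(sys.getrefcount(a))   # 3

# A reference cycle: neither count can ever reach zero on its own.
x, y = [], []
x.append(y)
y.append(x)
del x, y                    # pure reference counting would leak both lists here

print(gc.collect())         # CPython's cycle detector finds them; prints the number
                            # of unreachable objects it collected (>= 2 here)
```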
Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to "escape" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs.
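The distinction escape analysis draws can be illustrated with two small functions. The sketch below is written in Python purely for readability; CPython itself heap-allocates every object, so the comments describe what an escape-analyzing compiler for a suitable language could do, not what CPython does:

```python
def local_only(n):
    # `tmp` is created, used, and dropped entirely within this call; it does not
    # escape, so an escape-analyzing compiler could stack-allocate it and the
    # garbage collector would never need to see it.
    tmp = [i * i for i in range(n)]
    return sum(tmp)

def escapes(n):
    # The list is returned to the caller, so its lifetime outlives this call:
    # it escapes and must live on the heap.
    return [i * i for i in range(n)]
```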
Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages lacking built in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++.
Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection.
Other dynamic languages, such as Ruby, Julia, JavaScript and ECMAScript (but not Perl 5 or PHP before version 5.3, which both use reference counting), also tend to use GC. Object-oriented programming languages such as Smalltalk, RPL and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors.
BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. Similarly the Applesoft BASIC interpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting in O(n²) performance and pauses anywhere from a few seconds to a few minutes. A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.SYSTEM, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster.
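The quadratic cost of that scheme follows from doing one full scan of the descriptor table per string moved. A small illustrative count of the work pattern (a sketch, not Applesoft's actual code):

```python
def naive_compaction_work(num_strings):
    # One full scan of all remaining descriptors per string compacted:
    # n + (n - 1) + ... + 1 visits, i.e. O(n^2) in total.
    visits = 0
    remaining = num_strings
    while remaining > 0:
        visits += remaining   # scan every remaining descriptor for the highest address
        remaining -= 1        # that string is now compacted; repeat for the rest
    return visits

print(naive_compaction_work(100))    # 5050 descriptor visits for 100 strings
print(naive_compaction_work(1000))   # 500500 -- 10x the strings, ~100x the work
```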
While Objective-C traditionally had no garbage collection, with the release of OS X 10.5 in 2007 Apple introduced garbage collection for Objective-C 2.0, using an in-house developed runtime collector. However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counter (ARC) that was introduced with OS X 10.7. Furthermore, since May 2015 Apple has even forbidden the usage of garbage collection for new OS X applications in the App Store. For iOS, garbage collection has never been introduced due to problems in application responsivity and performance; instead, iOS uses ARC.
Garbage collection is rarely used on embedded or real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed. The Microsoft .NET Micro Framework, .NET nanoFramework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.
Garbage collectors available in Java JDKs include:
Compile-time garbage collection is a form of static analysis allowing memory to be reused and reclaimed based on invariants known during compilation.
This form of garbage collection has been studied in the Mercury programming language, and it saw greater usage with the introduction of LLVM's automatic reference counter (ARC) into Apple's ecosystem (iOS and OS X) in 2011.
Incremental, concurrent, and real-time garbage collectors have been developed, for example by Henry Baker and by Henry Lieberman.
In Baker's algorithm, the allocation is done in either half of a single region of memory. When it becomes half full, a garbage collection is performed which moves the live objects into the other half and the remaining objects are implicitly deallocated. The running program (the 'mutator') has to check that any object it references is in the correct half, and if not move it across, while a background task is finding all of the objects.
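The two-space copying idea underlying Baker's algorithm can be sketched as a simple stop-the-world semi-space collector in the style of Cheney's algorithm; Baker's contribution was making this incremental via checks on every access, which the toy version below omits. Objects are modeled as Python dicts, and every name is illustrative:

```python
def collect(roots):
    """Evacuate everything reachable from `roots` into a fresh to-space."""
    to_space = []

    def copy(obj):
        if "forward" in obj:               # already evacuated: follow the forwarding pointer
            return obj["forward"]
        new = {"refs": list(obj["refs"])}  # shallow copy into to-space
        obj["forward"] = new               # leave a forwarding pointer in from-space
        to_space.append(new)
        return new

    new_roots = [copy(r) for r in roots]
    scan = 0
    while scan < len(to_space):            # breadth-first scan, fixing up references
        to_space[scan]["refs"] = [copy(c) for c in to_space[scan]["refs"]]
        scan += 1
    return new_roots, to_space             # from-space objects never copied are implicitly freed

a = {"refs": []}
b = {"refs": [a]}
unreachable = {"refs": [a]}                # no root points here, so it is never copied
roots, new_heap = collect([b])
print(len(new_heap))                       # 2 -- only b and a survive
```

Anything left in from-space without a forwarding pointer after the scan is garbage and is reclaimed wholesale when the two halves are flipped.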
Generational garbage collection schemes are based on the empirical observation that most objects die young. In generational garbage collection two or more allocation regions (generations) are kept, which are kept separate based on object's age. New objects are created in the "young" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next oldest generation. Occasionally a full scan is performed.
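CPython's cycle collector is itself generational and exposes its tuning through the standard gc module, which makes the scheme easy to inspect (the threshold values shown are CPython defaults and can differ between versions):

```python
import gc

# Objects start in generation 0; survivors of a collection are promoted to an
# older generation, which is scanned far less often.
print(gc.get_threshold())   # (700, 10, 10) by default in CPython
print(gc.get_count())       # current number of tracked objects in each generation

gc.collect(0)               # cheap, frequent: collect only the youngest generation
gc.collect(2)               # the occasional full scan across all generations
```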
Some high-level language computer architectures include hardware support for real-time garbage collection.
Most implementations of real-time garbage collectors use tracing. Such real-time garbage collectors meet hard real-time constraints when used with a real-time operating system. | [
{
"paragraph_id": 0,
"text": "In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced; such memory is called garbage. Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Garbage collection relieves the programmer from doing manual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so. Other, similar techniques include stack allocation, region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affect performance as a result.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods (e.g. destructors). Some such methods de-allocate memory also.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Many programming languages require garbage collection, either as part of the language specification (e.g., RPL, Java, C#, D, Go, and most scripting languages) or effectively for practical implementation (e.g., formal languages like lambda calculus). These are said to be garbage-collected languages. Other languages, such as C and C++, were designed for use with manual memory management, but have garbage-collected implementations available. Some languages, like Ada, Modula-3, and C++/CLI, allow both garbage collection and manual memory management to co-exist in the same application by using separate heaps for collected and manually managed objects. Still others, like D, are garbage-collected but allow the user to manually delete objects or even disable garbage collection entirely when speed is required.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "Although many languages integrate GC into their compiler and runtime system, post-hoc GC systems also exist, such as Automatic Reference Counting (ARC). Some of these post-hoc GC systems do not require recompilation. Post-hoc GC is sometimes called litter collection, to distinguish it from ordinary GC.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "GC frees the programmer from manually de-allocating memory. This helps avoid some kinds of errors:",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "GC uses computing resources to decide which memory to free. Therefore, the penalty for the convenience of not annotating object lifetime manually in the source code is overhead, which can impair program performance. A peer-reviewed paper from 2005 concluded that GC needs five times the memory to compensate for this overhead and to perform as fast as the same program using idealised explicit memory management. The comparison however is made to a program generated by inserting deallocation calls using an oracle, implemented by collecting traces from programs run under a profiler, and the program is only correct for one particular execution of the program. Interaction with memory hierarchy effects can make this overhead intolerable in circumstances that are hard to predict or to detect in routine testing. The impact on performance was given by Apple as a reason for not adopting garbage collection in iOS, despite it being the most desired feature.",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "The moment when the garbage is actually collected can be unpredictable, resulting in stalls (pauses to shift/free memory) scattered throughout a session. Unpredictable stalls can be unacceptable in real-time environments, in transaction processing, or in interactive programs. Incremental, concurrent, and real-time garbage collectors address these problems, with varying trade-offs.",
"title": "Overview"
},
{
"paragraph_id": 8,
"text": "Tracing garbage collection is the most common type of garbage collection, so much so that \"garbage collection\" often refers to tracing garbage collection, rather than other methods such as reference counting. The overall strategy consists of determining which objects should be garbage collected by tracing which objects are reachable by a chain of references from certain root objects, and considering the rest as garbage and collecting them. However, there are a large number of algorithms used in implementation, with widely varying complexity and performance characteristics.",
"title": "Strategies"
},
{
"paragraph_id": 9,
"text": "Reference counting garbage collection is where each object has a count of the number of references to it. Garbage is identified by having a reference count of zero. An object's reference count is incremented when a reference to it is created, and decremented when a reference is destroyed. When the count reaches zero, the object's memory is reclaimed.",
"title": "Strategies"
},
{
"paragraph_id": 10,
"text": "As with manual memory management, and unlike tracing garbage collection, reference counting guarantees that objects are destroyed as soon as their last reference is destroyed, and usually only accesses memory which is either in CPU caches, in objects to be freed, or directly pointed to by those, and thus tends to not have significant negative side effects on CPU cache and virtual memory operation.",
"title": "Strategies"
},
{
"paragraph_id": 11,
"text": "There are a number of disadvantages to reference counting; this can generally be solved or mitigated by more sophisticated algorithms:",
"title": "Strategies"
},
{
"paragraph_id": 12,
"text": "Escape analysis is a compile-time technique that can convert heap allocations to stack allocations, thereby reducing the amount of garbage collection to be done. This analysis determines whether an object allocated inside a function is accessible outside of it. If a function-local allocation is found to be accessible to another function or thread, the allocation is said to \"escape\" and cannot be done on the stack. Otherwise, the object may be allocated directly on the stack and released when the function returns, bypassing the heap and associated memory management costs.",
"title": "Strategies"
},
{
"paragraph_id": 13,
"text": "Generally speaking, higher-level programming languages are more likely to have garbage collection as a standard feature. In some languages lacking built in garbage collection, it can be added through a library, as with the Boehm garbage collector for C and C++.",
"title": "Availability"
},
{
"paragraph_id": 14,
"text": "Most functional programming languages, such as ML, Haskell, and APL, have garbage collection built in. Lisp is especially notable as both the first functional programming language and the first language to introduce garbage collection.",
"title": "Availability"
},
{
"paragraph_id": 15,
"text": "Other dynamic languages, such as Ruby and Julia (but not Perl 5 or PHP before version 5.3, which both use reference counting), JavaScript and ECMAScript also tend to use GC. Object-oriented programming languages such as Smalltalk, RPL and Java usually provide integrated garbage collection. Notable exceptions are C++ and Delphi, which have destructors.",
"title": "Availability"
},
{
"paragraph_id": 16,
"text": "BASIC and Logo have often used garbage collection for variable-length data types, such as strings and lists, so as not to burden programmers with memory management details. On the Altair 8800, programs with many string variables and little string space could cause long pauses due to garbage collection. Similarly the Applesoft BASIC interpreter's garbage collection algorithm repeatedly scans the string descriptors for the string having the highest address in order to compact it toward high memory, resulting in O ( n 2 ) {\\displaystyle O(n^{2})} performance and pauses anywhere from a few seconds to a few minutes. A replacement garbage collector for Applesoft BASIC by Randy Wigginton identifies a group of strings in every pass over the heap, reducing collection time dramatically. BASIC.SYSTEM, released with ProDOS in 1983, provides a windowing garbage collector for BASIC that is many times faster.",
"title": "Availability"
},
{
"paragraph_id": 17,
"text": "While the Objective-C traditionally had no garbage collection, with the release of OS X 10.5 in 2007 Apple introduced garbage collection for Objective-C 2.0, using an in-house developed runtime collector. However, with the 2012 release of OS X 10.8, garbage collection was deprecated in favor of LLVM's automatic reference counter (ARC) that was introduced with OS X 10.7. Furthermore, since May 2015 Apple even forbids the usage of garbage collection for new OS X applications in the App Store. For iOS, garbage collection has never been introduced due to problems in application responsivity and performance; instead, iOS uses ARC.",
"title": "Availability"
},
{
"paragraph_id": 18,
"text": "Garbage collection is rarely used on embedded or real-time systems because of the usual need for very tight control over the use of limited resources. However, garbage collectors compatible with many limited environments have been developed. The Microsoft .NET Micro Framework, .NET nanoFramework and Java Platform, Micro Edition are embedded software platforms that, like their larger cousins, include garbage collection.",
"title": "Availability"
},
{
"paragraph_id": 19,
"text": "Garbage collectors available in Java JDKs include:",
"title": "Availability"
},
{
"paragraph_id": 20,
"text": "Compile-time garbage collection is a form of static analysis allowing memory to be reused and reclaimed based on invariants known during compilation.",
"title": "Availability"
},
{
"paragraph_id": 21,
"text": "This form of garbage collection has been studied in the Mercury programming language, and it saw greater usage with the introduction of LLVM's automatic reference counter (ARC) into Apple's ecosystem (iOS and OS X) in 2011.",
"title": "Availability"
},
{
"paragraph_id": 22,
"text": "Incremental, concurrent, and real-time garbage collectors have been developed, for example by Henry Baker and by Henry Lieberman.",
"title": "Availability"
},
{
"paragraph_id": 23,
"text": "In Baker's algorithm, the allocation is done in either half of a single region of memory. When it becomes half full, a garbage collection is performed which moves the live objects into the other half and the remaining objects are implicitly deallocated. The running program (the 'mutator') has to check that any object it references is in the correct half, and if not move it across, while a background task is finding all of the objects.",
"title": "Availability"
},
{
"paragraph_id": 24,
"text": "Generational garbage collection schemes are based on the empirical observation that most objects die young. In generational garbage collection two or more allocation regions (generations) are kept, which are kept separate based on object's age. New objects are created in the \"young\" generation that is regularly collected, and when a generation is full, the objects that are still referenced from older regions are copied into the next oldest generation. Occasionally a full scan is performed.",
"title": "Availability"
},
{
"paragraph_id": 25,
"text": "Some high-level language computer architectures include hardware support for real-time garbage collection.",
"title": "Availability"
},
{
"paragraph_id": 26,
"text": "Most implementations of real-time garbage collectors use tracing. Such real-time garbage collectors meet hard real-time constraints when used with a real-time operating system.",
"title": "Availability"
}
] | In computer science, garbage collection (GC) is a form of automatic memory management. The garbage collector attempts to reclaim memory which was allocated by the program, but is no longer referenced; such memory is called garbage. Garbage collection was invented by American computer scientist John McCarthy around 1959 to simplify manual memory management in Lisp. Garbage collection relieves the programmer from doing manual memory management, where the programmer specifies what objects to de-allocate and return to the memory system and when to do so. Other, similar techniques include stack allocation, region inference, and memory ownership, and combinations thereof. Garbage collection may take a significant proportion of a program's total processing time, and affect performance as a result. Resources other than memory, such as network sockets, database handles, windows, file descriptors, and device descriptors, are not typically handled by garbage collection, but rather by other methods. Some such methods de-allocate memory also. | 2001-10-19T04:13:44Z | 2023-12-20T07:39:21Z | [
"Template:About",
"Template:Unreferenced section",
"Template:Main",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:John McCarthy",
"Template:Short description",
"Template:More citations needed section",
"Template:Citation needed",
"Template:Cite web",
"Template:Memory management",
"Template:Authority control",
"Template:Wikibooks",
"Template:Use dmy dates",
"Template:Use list-defined references",
"Template:Portal"
] | https://en.wikipedia.org/wiki/Garbage_collection_(computer_science) |
6,736 | Canidae | Canidae (/ˈkænɪdiː/; from Latin, canis, "dog") is a biological family of dog-like carnivorans, colloquially referred to as dogs, and constitutes a clade. A member of this family is also called a canid (/ˈkeɪnɪd/). The family includes three subfamilies: the Caninae, the extinct Borophaginae and Hesperocyoninae. The Caninae are known as canines, and include domestic dogs, wolves, coyotes, foxes, jackals and other species.
Canids are found on all continents except Antarctica, having arrived independently or accompanied by human beings over extended periods of time. Canids vary in size from the 2-metre-long (6.6 ft) gray wolf to the 24-centimetre-long (9.4 in) fennec fox. The body forms of canids are similar, typically having long muzzles, upright ears, teeth adapted for cracking bones and slicing flesh, long legs, and bushy tails. They are mostly social animals, living together in family units or small groups and behaving co-operatively. Typically, only the dominant pair in a group breeds and a litter of young are reared annually in an underground den. Canids communicate by scent signals and vocalizations. One canid, the domestic dog, originated from a symbiotic relationship with Upper Paleolithic humans and today remains one of the most widely kept domestic animals.
In the history of the carnivores, the family Canidae is represented by the two extinct subfamilies designated as Hesperocyoninae and Borophaginae, and the extant subfamily Caninae. This subfamily includes all living canids and their most recent fossil relatives. All living canids as a group form a dental monophyletic relationship with the extinct borophagines, with both groups having a bicuspid (two points) on the lower carnassial talonid, which gives this tooth an additional ability in mastication. This, together with the development of a distinct entoconid cusp and the broadening of the talonid of the first lower molar, and the corresponding enlargement of the talon of the upper first molar and reduction of its parastyle distinguish these late Cenozoic canids and are the essential differences that identify their clade.
The cat-like feliforms and dog-like caniforms emerged within the Carnivoramorpha around 45–42 Mya (million years ago). The Canidae first appeared in North America during the Late Eocene (37.8–33.9 Mya). They did not reach Eurasia until the Miocene, or South America until the Late Pliocene.
This cladogram shows the phylogenetic position of canids within Caniformia, based on fossil finds:
The Canidae today includes a diverse group of some 37 species ranging in size from the maned wolf with its long limbs to the short-legged bush dog. Modern canids inhabit forests, tundra, savannahs, and deserts throughout tropical and temperate parts of the world. The evolutionary relationships between the species have been studied in the past using morphological approaches, but more recently, molecular studies have enabled the investigation of phylogenetic relationships. In some species, genetic divergence has been suppressed by the high level of gene flow between different populations, and where the species have hybridized, large hybrid zones exist.
Carnivorans evolved after the extinction of the non-avian dinosaurs 66 million years ago. Around 50 million years ago, or earlier, in the Paleocene, the carnivorans split into two main divisions: caniforms (dog-like) and feliforms (cat-like). By 40 Mya, the first identifiable member of the dog family had arisen. Named Prohesperocyon wilsoni, its fossilized remains have been found in what is now the southwestern part of Texas. The chief features which identify it as a canid include the loss of the upper third molar (part of a trend toward a more shearing bite), and the structure of the middle ear which has an enlarged bulla (the hollow bony structure protecting the delicate parts of the ear). Prohesperocyon probably had slightly longer limbs than its predecessors, and also had parallel and closely touching toes which differ markedly from the splayed arrangements of the digits in bears.
The canid family soon subdivided into three subfamilies, each of which diverged during the Eocene: Hesperocyoninae (about 39.74–15 Mya), Borophaginae (about 34–32 Mya), and Caninae (about 34–30 Mya). The Caninae are the only surviving subfamily and comprise all present-day canids, including wolves, foxes, coyotes, jackals, and domestic dogs. Members of each subfamily showed an increase in body mass with time and some exhibited specialized hypercarnivorous diets that made them prone to extinction.
By the Oligocene, all three subfamilies of canids (Hesperocyoninae, Borophaginae, and Caninae) had appeared in the fossil records of North America. The earliest and most primitive branch of the Canidae was the Hesperocyoninae lineage, which included the coyote-sized Mesocyon of the Oligocene (38–24 Mya). These early canids probably evolved for the fast pursuit of prey in a grassland habitat; they resembled modern viverrids in appearance. Hesperocyonines eventually became extinct in the middle Miocene. One of the early members of the Hesperocyonines, the genus Hesperocyon, gave rise to Archaeocyon and Leptocyon. These branches led to the borophagine and canine radiations.
Around 8 Mya, the Beringian land bridge allowed members of the genus Eucyon a means to enter Asia from North America and they continued on to colonize Europe.
The Canis, Urocyon, and Vulpes genera developed from canids from North America, where the canine radiation began. The success of these canines was related to the development of lower carnassials that were capable of both mastication and shearing. Around 5 million years ago, some of the Old World Eucyon evolved into the first members of Canis. During the Pliocene, around 4–5 Mya, Canis lepophagus appeared in North America. This species was small and sometimes coyote-like; others were wolf-like in characteristics. C. latrans (the coyote) is theorized to have descended from C. lepophagus.
The formation of the Isthmus of Panama, about 3 Mya, joined South America to North America, allowing canids to invade South America, where they diversified. However, the most recent common ancestor of the South American canids lived in North America some 4 Mya, and more than one incursion across the new land bridge is likely, given that more than one lineage is present in South America. Two North American lineages found in South America are the gray fox (Urocyon cinereoargenteus) and the now-extinct dire wolf (Aenocyon dirus). Besides these, there are species endemic to South America: the maned wolf (Chrysocyon brachyurus), the short-eared dog (Atelocynus microtis), the bush dog (Speothos venaticus), the crab-eating fox (Cerdocyon thous), and the South American foxes (Lycalopex spp.). The monophyly of this group has been established by molecular means.
During the Pleistocene, the North American wolf line appeared, with Canis edwardii, clearly identifiable as a wolf; Canis rufus appeared, possibly a direct descendant of C. edwardii. Around 0.8 Mya, Canis armbrusteri emerged in North America. A large wolf, it was found all over North and Central America and was eventually supplanted by the dire wolf, which then spread into South America during the Late Pleistocene.
By 0.3 Mya, a number of subspecies of the gray wolf (C. lupus) had developed and had spread throughout Europe and northern Asia. The gray wolf colonized North America during the late Rancholabrean era across the Bering land bridge, with at least three separate invasions, with each one consisting of one or more different Eurasian gray wolf clades. MtDNA studies have shown that there are at least four extant C. lupus lineages. The dire wolf shared its habitat with the gray wolf, but became extinct in a large-scale extinction event that occurred around 11,500 years ago. It may have been more of a scavenger than a hunter; its molars appear to be adapted for crushing bones and it may have gone extinct as a result of the extinction of the large herbivorous animals on whose carcasses it relied.
In 2015, a study of mitochondrial genome sequences and whole-genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonized Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. When comparing the African and Eurasian golden jackals, the study concluded that the African specimens represented a distinct monophyletic lineage that should be recognized as a separate species, Canis anthus (African golden wolf). According to a phylogeny derived from nuclear sequences, the Eurasian golden jackal (Canis aureus) diverged from the wolf/coyote lineage 1.9 Mya, but the African golden wolf separated 1.3 Mya. Mitochondrial genome sequences indicated the Ethiopian wolf diverged from the wolf/coyote lineage slightly prior to that.
Wild canids are found on every continent except Antarctica, and inhabit a wide range of different habitats, including deserts, mountains, forests, and grasslands. They vary in size from the fennec fox, which may be as little as 24 cm (9.4 in) in length and weigh 0.6 kg (1.3 lb), to the gray wolf, which may be up to 160 cm (5.2 ft) long, and can weigh up to 79 kg (174 lb). Only a few species are arboreal—the gray fox, the closely related island fox and the raccoon dog habitually climb trees.
All canids have a similar basic form, as exemplified by the gray wolf, although the relative length of muzzle, limbs, ears, and tail vary considerably between species. With the exceptions of the bush dog, the raccoon dog and some domestic dog breeds, canids have relatively long legs and lithe bodies, adapted for chasing prey. The tails are bushy and the length and quality of the pelage vary with the season. The muzzle portion of the skull is much more elongated than that of the cat family. The zygomatic arches are wide, there is a transverse lambdoidal ridge at the rear of the cranium and in some species, a sagittal crest running from front to back. The bony orbits around the eye never form a complete ring and the auditory bullae are smooth and rounded. Females have three to seven pairs of mammae.
All canids are digitigrade, meaning they walk on their toes. The tip of the nose is always naked, as are the cushioned pads on the soles of the feet. These latter consist of a single pad behind the tip of each toe and a more-or-less three-lobed central pad under the roots of the digits. Hairs grow between the pads and in the Arctic fox the sole of the foot is densely covered with hair at some times of the year. With the exception of the four-toed African wild dog (Lycaon pictus), five toes are on the forefeet, but the pollex (thumb) is reduced and does not reach the ground. On the hind feet are four toes, but in some domestic dogs, a fifth vestigial toe, known as a dewclaw, is sometimes present, but has no anatomical connection to the rest of the foot. In some species, slightly curved nails are non-retractile and more-or-less blunt while other species have sharper, partially-retractile claws.
The penis in male canids is supported by a baculum and contains a structure called the bulbus glandis, which creates a copulatory tie that lasts for up to an hour during mating. Young canids are born blind, with their eyes opening a few weeks after birth. All living canids (Caninae) have a ligament analogous to the nuchal ligament of ungulates used to maintain the posture of the head and neck with little active muscle exertion; this ligament allows them to conserve energy while running long distances following scent trails with their nose to the ground. However, based on skeletal details of the neck, at least some of the Borophaginae (such as Aelurodon) are believed to have lacked this ligament.
Dentition relates to the arrangement of teeth in the mouth, with the dental notation for the upper-jaw teeth using the upper-case letters I to denote incisors, C for canines, P for premolars, and M for molars, and the lower-case letters i, c, p and m to denote the mandible teeth. Teeth are numbered using one side of the mouth and from the front of the mouth to the back. In carnivores, the upper premolar P4 and the lower molar m1 form the carnassials that are used together in a scissor-like action to shear the muscle and tendon of prey.
Canids use their premolars for cutting and crushing except for the upper fourth premolar P4 (the upper carnassial) that is only used for cutting. They use their molars for grinding except for the lower first molar m1 (the lower carnassial) that has evolved for both cutting and grinding depending on the canid's dietary adaptation. On the lower carnassial, the trigonid is used for slicing and the talonid is used for grinding. The ratio between the trigonid and the talonid indicates a carnivore's dietary habits, with a larger trigonid indicating a hypercarnivore and a larger talonid indicating a more omnivorous diet. Because of its low variability, the length of the lower carnassial is used to provide an estimate of a carnivore's body size.
A study of the estimated bite force at the canine teeth of a large sample of living and fossil mammalian predators, when adjusted for their body mass, found that for placental mammals the bite force at the canines was greatest in the extinct dire wolf (163), followed among the modern canids by the four hypercarnivores that often prey on animals larger than themselves: the African wild dog (142), the gray wolf (136), the dhole (112), and the dingo (108). The bite force at the carnassials showed a similar trend to the canines. A predator's largest prey size is strongly influenced by its biomechanical limits.
Most canids have 42 teeth, with a dental formula of 3.1.4.2 in the upper jaw and 3.1.4.3 in the lower jaw. The bush dog has only one upper molar with two below, the dhole has two above and two below, and the bat-eared fox has three or four upper molars and four lower ones. The molar teeth are strong in most species, allowing the animals to crack open bone to reach the marrow. The deciduous, or baby, tooth formula in canids is 3.1.3 in both the upper and lower jaws, molars being completely absent.
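As a check on that formula, each side of the upper jaw has 3 + 1 + 4 + 2 = 10 teeth and each side of the lower jaw has 3 + 1 + 4 + 3 = 11, so the total is 2 × (10 + 11) = 42 teeth.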
Almost all canids are social animals and live together in groups. In general, they are territorial or have a home range and sleep in the open, using their dens only for breeding and sometimes in bad weather. In most foxes, and in many of the true dogs, a male and female pair work together to hunt and to raise their young. Gray wolves and some of the other larger canids live in larger groups called packs. African wild dogs have packs which may consist of 20 to 40 animals and packs of fewer than about seven individuals may be incapable of successful reproduction. Hunting in packs has the advantage that larger prey items can be tackled. Some species form packs or live in small family groups depending on the circumstances, including the type of available food. In most species, some individuals live on their own. Within a canid pack, there is a system of dominance so that the strongest, most experienced animals lead the pack. In most cases, the dominant male and female are the only pack members to breed.
Canids communicate with each other by scent signals, by visual clues and gestures, and by vocalizations such as growls, barks, and howls. In most cases, groups have a home territory from which they drive out other conspecifics. The territory is marked by leaving urine scent marks, which warn trespassing individuals. Social behavior is also mediated by secretions from glands on the upper surface of the tail near its root and from the anal glands, preputial glands, and supracaudal glands.
Canids as a group exhibit several reproductive traits that are uncommon among mammals as a whole. They are typically monogamous, provide paternal care to their offspring, have reproductive cycles with lengthy proestral and dioestral phases and have a copulatory tie during mating. They also retain adult offspring in the social group, suppressing the ability of these to breed while making use of the alloparental care they can provide to help raise the next generation of offspring. Most canid species are spontaneous ovulators, although maned wolves are induced ovulators.
During the proestral period, increased levels of estradiol make the female attractive to the male. There is a rise in progesterone during the estral phase, when the female is receptive. Following this, the level of estradiol fluctuates and there is a lengthy dioestrous phase, during which the female is pregnant. Pseudo-pregnancy frequently occurs in canids that have ovulated but failed to conceive. A period of anestrus follows pregnancy or pseudo-pregnancy, there being only one oestral period during each breeding season. Small and medium-sized canids mostly have a gestation period of 50 to 60 days, while larger species average 60 to 65 days. The time of year in which the breeding season occurs is related to the length of day, as has been demonstrated in the case of several species that have been translocated across the equator to the other hemisphere and experienced a six-month shift of phase. Domestic dogs and certain small canids in captivity may come into oestrus more frequently, perhaps because the photoperiod stimulus breaks down under conditions of artificial lighting.
The size of a litter varies, with from one to 16 or more pups being born. The young are born small, blind and helpless and require a long period of parental care. They are kept in a den, most often dug into the ground, for warmth and protection. When the young begin eating solid food, both parents, and often other pack members, bring food back for them from the hunt. This is most often vomited up from the adult's stomach. Where such pack involvement in the feeding of the litter occurs, the breeding success rate is higher than is the case where females split from the group and rear their pups in isolation. Young canids may take a year to mature and learn the skills they need to survive. In some species, such as the African wild dog, male offspring usually remain in the natal pack, while females disperse as a group and join another small group of the opposite sex to form a new pack.
One canid, the domestic dog, entered into a partnership with humans a long time ago. The dog was the first domesticated species. The archaeological record shows the first undisputed dog remains buried beside humans 14,700 years ago, with disputed remains occurring 36,000 years ago. These dates imply that the earliest dogs arose in the time of human hunter-gatherers and not agriculturists.
The fact that wolves are pack animals with cooperative social structures may have been the reason that the relationship developed. Humans benefited from the canid's loyalty, cooperation, teamwork, alertness and tracking abilities, while the wolf may have benefited from the use of weapons to tackle larger prey and the sharing of food. Humans and dogs may have evolved together.
Among canids, only the gray wolf has widely been known to prey on humans. Nonetheless, at least two records of coyotes killing humans have been published, and at least two other reports of golden jackals killing children. Human beings have trapped and hunted some canid species for their fur and some, especially the gray wolf, the coyote and the red fox, for sport. Canids such as the dhole are now endangered in the wild because of persecution, habitat loss, a depletion of ungulate prey species and transmission of diseases from domestic dogs. | [
{
"paragraph_id": 0,
"text": "Canidae (/ˈkænɪdiː/; from Latin, canis, \"dog\") is a biological family of dog-like carnivorans, colloquially referred to as dogs, and constitutes a clade. A member of this family is also called a canid (/ˈkeɪnɪd/). The family includes three subfamilies: the Caninae, the extinct Borophaginae and Hesperocyoninae. The Caninae are known as canines, and include domestic dogs, wolves, coyotes, foxes, jackals and other species.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Canids are found on all continents except Antarctica, having arrived independently or accompanied by human beings over extended periods of time. Canids vary in size from the 2-metre-long (6.6 ft) gray wolf to the 24-centimetre-long (9.4 in) fennec fox. The body forms of canids are similar, typically having long muzzles, upright ears, teeth adapted for cracking bones and slicing flesh, long legs, and bushy tails. They are mostly social animals, living together in family units or small groups and behaving co-operatively. Typically, only the dominant pair in a group breeds and a litter of young are reared annually in an underground den. Canids communicate by scent signals and vocalizations. One canid, the domestic dog, originated from a symbiotic relationship with Upper Paleolithic humans and today remains one of the most widely kept domestic animals.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the history of the carnivores, the family Canidae is represented by the two extinct subfamilies designated as Hesperocyoninae and Borophaginae, and the extant subfamily Caninae. This subfamily includes all living canids and their most recent fossil relatives. All living canids as a group form a dental monophyletic relationship with the extinct borophagines, with both groups having a bicuspid (two points) on the lower carnassial talonid, which gives this tooth an additional ability in mastication. This, together with the development of a distinct entoconid cusp and the broadening of the talonid of the first lower molar, and the corresponding enlargement of the talon of the upper first molar and reduction of its parastyle distinguish these late Cenozoic canids and are the essential differences that identify their clade.",
"title": "Taxonomy"
},
{
"paragraph_id": 3,
"text": "The cat-like feliformia and dog-like Caniforms emerged within the Carnivoramorpha around 45–42 Mya (million years ago). The Canidae first appeared in North America during the Late Eocene (37.8-33.9 Mya). They did not reach Eurasia until the Miocene or to South America until the Late Pliocene.",
"title": "Taxonomy"
},
{
"paragraph_id": 4,
"text": "This cladogram shows the phylogenetic position of canids within Caniformia, based on fossil finds:",
"title": "Taxonomy"
},
{
"paragraph_id": 5,
"text": "The Canidae today includes a diverse group of some 37 species ranging in size from the maned wolf with its long limbs to the short-legged bush dog. Modern canids inhabit forests, tundra, savannahs, and deserts throughout tropical and temperate parts of the world. The evolutionary relationships between the species have been studied in the past using morphological approaches, but more recently, molecular studies have enabled the investigation of phylogenetics relationships. In some species, genetic divergence has been suppressed by the high level of gene flow between different populations and where the species have hybridized, large hybrid zones exist.",
"title": "Evolution"
},
{
"paragraph_id": 6,
"text": "Carnivorans evolved after the extinction of the non-avian dinosaurs 66 million years ago. Around 50 million years ago, or earlier, in the Paleocene, the carnivorans split into two main divisions: caniforms (dog-like) and feliforms (cat-like). By 40 Mya, the first identifiable member of the dog family had arisen. Named Prohesperocyon wilsoni, its fossilized remains have been found in what is now the southwestern part of Texas. The chief features which identify it as a canid include the loss of the upper third molar (part of a trend toward a more shearing bite), and the structure of the middle ear which has an enlarged bulla (the hollow bony structure protecting the delicate parts of the ear). Prohesperocyon probably had slightly longer limbs than its predecessors, and also had parallel and closely touching toes which differ markedly from the splayed arrangements of the digits in bears.",
"title": "Evolution"
},
{
"paragraph_id": 7,
"text": "The canid family soon subdivided into three subfamilies, each of which diverged during the Eocene: Hesperocyoninae (about 39.74–15 Mya), Borophaginae (about 34–32 Mya), and Caninae (about 34–30 Mya). The Caninae are the only surviving subfamily and all present-day canids, including wolves, foxes, coyotes, jackals, and domestic dogs. Members of each subfamily showed an increase in body mass with time and some exhibited specialized hypercarnivorous diets that made them prone to extinction.",
"title": "Evolution"
},
{
"paragraph_id": 8,
"text": "By the Oligocene, all three subfamilies of canids (Hesperocyoninae, Borophaginae, and Caninae) had appeared in the fossil records of North America. The earliest and most primitive branch of the Canidae was the Hesperocyoninae lineage, which included the coyote-sized Mesocyon of the Oligocene (38–24 Mya). These early canids probably evolved for the fast pursuit of prey in a grassland habitat; they resembled modern viverrids in appearance. Hesperocyonines eventually became extinct in the middle Miocene. One of the early members of the Hesperocyonines, the genus Hesperocyon, gave rise to Archaeocyon and Leptocyon. These branches led to the borophagine and canine radiations.",
"title": "Evolution"
},
{
"paragraph_id": 9,
"text": "Around 8 Mya, the Beringian land bridge allowed members of the genus Eucyon a means to enter Asia from North America and they continued on to colonize Europe.",
"title": "Evolution"
},
{
"paragraph_id": 10,
"text": "The Canis, Urocyon, and Vulpes genera developed from canids from North America, where the canine radiation began. The success of these canines was related to the development of lower carnassials that were capable of both mastication and shearing. Around 5 million years ago, some of the Old World Eucyon evolved into the first members of Canis, During the Pliocene, around 4–5 Mya, Canis lepophagus appeared in North America. This was small and sometimes coyote-like. Others were wolf-like in characteristics. C. latrans (the coyote) is theorized to have descended from C. lepophagus.",
"title": "Evolution"
},
{
"paragraph_id": 11,
"text": "The formation of the Isthmus of Panama, about 3 Mya, joined South America to North America, allowing canids to invade South America, where they diversified. However, the most recent common ancestor of the South American canids lived in North America some 4 Mya and more than one incursion across the new land bridge is likely given the fact that more than one lineage is present in South America. Two North American lineages found in South America are the gray fox (Urocyon cinereoargentus) and the now-extinct dire wolf (Aenocyon dirus). Besides these, there are species endemic to South America: the maned wolf (Chrysocyon brachyurus), the short-eared dog (Atelocynus microtis), the bush dog (Speothos venaticus), the crab-eating fox (Cerdocyon thous), and the South American foxes (Lycalopex spp.). The monophyly of this group has been established by molecular means.",
"title": "Evolution"
},
{
"paragraph_id": 12,
"text": "During the Pleistocene, the North American wolf line appeared, with Canis edwardii, clearly identifiable as a wolf, and Canis rufus appeared, possibly a direct descendant of C. edwardii. Around 0.8 Mya, Canis ambrusteri emerged in North America. A large wolf, it was found all over North and Central America and was eventually supplanted by the dire wolf, which then spread into South America during the Late Pleistocene.",
"title": "Evolution"
},
{
"paragraph_id": 13,
"text": "By 0.3 Mya, a number of subspecies of the gray wolf (C. lupus) had developed and had spread throughout Europe and northern Asia. The gray wolf colonized North America during the late Rancholabrean era across the Bering land bridge, with at least three separate invasions, with each one consisting of one or more different Eurasian gray wolf clades. MtDNA studies have shown that there are at least four extant C. lupus lineages. The dire wolf shared its habitat with the gray wolf, but became extinct in a large-scale extinction event that occurred around 11,500 years ago. It may have been more of a scavenger than a hunter; its molars appear to be adapted for crushing bones and it may have gone extinct as a result of the extinction of the large herbivorous animals on whose carcasses it relied.",
"title": "Evolution"
},
{
"paragraph_id": 14,
"text": "In 2015, a study of mitochondrial genome sequences and whole-genome nuclear sequences of African and Eurasian canids indicated that extant wolf-like canids have colonized Africa from Eurasia at least five times throughout the Pliocene and Pleistocene, which is consistent with fossil evidence suggesting that much of African canid fauna diversity resulted from the immigration of Eurasian ancestors, likely coincident with Plio-Pleistocene climatic oscillations between arid and humid conditions. When comparing the African and Eurasian golden jackals, the study concluded that the African specimens represented a distinct monophyletic lineage that should be recognized as a separate species, Canis anthus (African golden wolf). According to a phylogeny derived from nuclear sequences, the Eurasian golden jackal (Canis aureus) diverged from the wolf/coyote lineage 1.9 Mya, but the African golden wolf separated 1.3 Mya. Mitochondrial genome sequences indicated the Ethiopian wolf diverged from the wolf/coyote lineage slightly prior to that.",
"title": "Evolution"
},
{
"paragraph_id": 15,
"text": "Wild canids are found on every continent except Antarctica, and inhabit a wide range of different habitats, including deserts, mountains, forests, and grasslands. They vary in size from the fennec fox, which may be as little as 24 cm (9.4 in) in length and weigh 0.6 kg (1.3 lb), to the gray wolf, which may be up to 160 cm (5.2 ft) long, and can weigh up to 79 kg (174 lb). Only a few species are arboreal—the gray fox, the closely related island fox and the raccoon dog habitually climb trees.",
"title": "Characteristics"
},
{
"paragraph_id": 16,
"text": "All canids have a similar basic form, as exemplified by the gray wolf, although the relative length of muzzle, limbs, ears, and tail vary considerably between species. With the exceptions of the bush dog, the raccoon dog and some domestic dog breeds, canids have relatively long legs and lithe bodies, adapted for chasing prey. The tails are bushy and the length and quality of the pelage vary with the season. The muzzle portion of the skull is much more elongated than that of the cat family. The zygomatic arches are wide, there is a transverse lambdoidal ridge at the rear of the cranium and in some species, a sagittal crest running from front to back. The bony orbits around the eye never form a complete ring and the auditory bullae are smooth and rounded. Females have three to seven pairs of mammae.",
"title": "Characteristics"
},
{
"paragraph_id": 17,
"text": "All canids are digitigrade, meaning they walk on their toes. The tip of the nose is always naked, as are the cushioned pads on the soles of the feet. These latter consist of a single pad behind the tip of each toe and a more-or-less three-lobed central pad under the roots of the digits. Hairs grow between the pads and in the Arctic fox the sole of the foot is densely covered with hair at some times of the year. With the exception of the four-toed African wild dog (Lycaon pictus), five toes are on the forefeet, but the pollex (thumb) is reduced and does not reach the ground. On the hind feet are four toes, but in some domestic dogs, a fifth vestigial toe, known as a dewclaw, is sometimes present, but has no anatomical connection to the rest of the foot. In some species, slightly curved nails are non-retractile and more-or-less blunt while other species have sharper, partially-retractile claws.",
"title": "Characteristics"
},
{
"paragraph_id": 18,
"text": "The penis in male canids is supported by a baculum and contains a structure called the bulbus glandis, which creates a copulatory tie that lasts for up to an hour during mating. Young canids are born blind, with their eyes opening a few weeks after birth. All living canids (Caninae) have a ligament analogous to the nuchal ligament of ungulates used to maintain the posture of the head and neck with little active muscle exertion; this ligament allows them to conserve energy while running long distances following scent trails with their nose to the ground. However, based on skeletal details of the neck, at least some of the Borophaginae (such as Aelurodon) are believed to have lacked this ligament.",
"title": "Characteristics"
},
{
"paragraph_id": 19,
"text": "Dentition relates to the arrangement of teeth in the mouth, with the dental notation for the upper-jaw teeth using the upper-case letters I to denote incisors, C for canines, P for premolars, and M for molars, and the lower-case letters i, c, p and m to denote the mandible teeth. Teeth are numbered using one side of the mouth and from the front of the mouth to the back. In carnivores, the upper premolar P4 and the lower molar m1 form the carnassials that are used together in a scissor-like action to shear the muscle and tendon of prey.",
"title": "Characteristics"
},
{
"paragraph_id": 20,
"text": "Canids use their premolars for cutting and crushing except for the upper fourth premolar P4 (the upper carnassial) that is only used for cutting. They use their molars for grinding except for the lower first molar m1 (the lower carnassial) that has evolved for both cutting and grinding depending on the canid's dietary adaptation. On the lower carnassial, the trigonid is used for slicing and the talonid is used for grinding. The ratio between the trigonid and the talonid indicates a carnivore's dietary habits, with a larger trigonid indicating a hypercarnivore and a larger talonid indicating a more omnivorous diet. Because of its low variability, the length of the lower carnassial is used to provide an estimate of a carnivore's body size.",
"title": "Characteristics"
},
{
"paragraph_id": 21,
"text": "A study of the estimated bite force at the canine teeth of a large sample of living and fossil mammalian predators, when adjusted for their body mass, found that for placental mammals the bite force at the canines was greatest in the extinct dire wolf (163), followed among the modern canids by the four hypercarnivores that often prey on animals larger than themselves: the African wild dog (142), the gray wolf (136), the dhole (112), and the dingo (108). The bite force at the carnassials showed a similar trend to the canines. A predator's largest prey size is strongly influenced by its biomechanical limits.",
"title": "Characteristics"
},
{
"paragraph_id": 22,
"text": "Most canids have 42 teeth, with a dental formula of: 3.1.4.23.1.4.3. The bush dog has only one upper molar with two below, the dhole has two above and two below. and the bat-eared fox has three or four upper molars and four lower ones. The molar teeth are strong in most species, allowing the animals to crack open bone to reach the marrow. The deciduous, or baby teeth, formula in canids is 3.1.33.1.3, molars being completely absent.",
"title": "Characteristics"
},
{
"paragraph_id": 23,
"text": "Almost all canids are social animals and live together in groups. In general, they are territorial or have a home range and sleep in the open, using their dens only for breeding and sometimes in bad weather. In most foxes, and in many of the true dogs, a male and female pair work together to hunt and to raise their young. Gray wolves and some of the other larger canids live in larger groups called packs. African wild dogs have packs which may consist of 20 to 40 animals and packs of fewer than about seven individuals may be incapable of successful reproduction. Hunting in packs has the advantage that larger prey items can be tackled. Some species form packs or live in small family groups depending on the circumstances, including the type of available food. In most species, some individuals live on their own. Within a canid pack, there is a system of dominance so that the strongest, most experienced animals lead the pack. In most cases, the dominant male and female are the only pack members to breed.",
"title": "Life history"
},
{
"paragraph_id": 24,
"text": "Canids communicate with each other by scent signals, by visual clues and gestures, and by vocalizations such as growls, barks, and howls. In most cases, groups have a home territory from which they drive out other conspecifics. The territory is marked by leaving urine scent marks, which warn trespassing individuals. Social behavior is also mediated by secretions from glands on the upper surface of the tail near its root and from the anal glands, preputial glands, and supracaudal glands.",
"title": "Life history"
},
{
"paragraph_id": 25,
"text": "Canids as a group exhibit several reproductive traits that are uncommon among mammals as a whole. They are typically monogamous, provide paternal care to their offspring, have reproductive cycles with lengthy proestral and dioestral phases and have a copulatory tie during mating. They also retain adult offspring in the social group, suppressing the ability of these to breed while making use of the alloparental care they can provide to help raise the next generation of offspring. Most canid species are spontaneous ovulators, although maned wolves are induced ovulators.",
"title": "Life history"
},
{
"paragraph_id": 26,
"text": "During the proestral period, increased levels of estradiol make the female attractive to the male. There is a rise in progesterone during the estral phase when female is receptive. Following this, the level of estradiol fluctuates and there is a lengthy dioestrous phase during which the female is pregnant. Pseudo-pregnancy frequently occurs in canids that have ovulated but failed to conceive. A period of anestrus follows pregnancy or pseudo-pregnancy, there being only one oestral period during each breeding season. Small and medium-sized canids mostly have a gestation period of 50 to 60 days, while larger species average 60 to 65 days. The time of year in which the breeding season occurs is related to the length of day, as has been demonstrated in the case of several species that have been translocated across the equator to the other hemisphere and experiences a six-month shift of phase. Domestic dogs and certain small canids in captivity may come into oestrus more frequently, perhaps because the photoperiod stimulus breaks down under conditions of artificial lighting.",
"title": "Life history"
},
{
"paragraph_id": 27,
"text": "The size of a litter varies, with from one to 16 or more pups being born. The young are born small, blind and helpless and require a long period of parental care. They are kept in a den, most often dug into the ground, for warmth and protection. When the young begin eating solid food, both parents, and often other pack members, bring food back for them from the hunt. This is most often vomited up from the adult's stomach. Where such pack involvement in the feeding of the litter occurs, the breeding success rate is higher than is the case where females split from the group and rear their pups in isolation. Young canids may take a year to mature and learn the skills they need to survive. In some species, such as the African wild dog, male offspring usually remain in the natal pack, while females disperse as a group and join another small group of the opposite sex to form a new pack.",
"title": "Life history"
},
{
"paragraph_id": 28,
"text": "One canid, the domestic dog, entered into a partnership with humans a long time ago. The dog was the first domesticated species. The archaeological record shows the first undisputed dog remains buried beside humans 14,700 years ago, with disputed remains occurring 36,000 years ago. These dates imply that the earliest dogs arose in the time of human hunter-gatherers and not agriculturists.",
"title": "Canids and humans"
},
{
"paragraph_id": 29,
"text": "The fact that wolves are pack animals with cooperative social structures may have been the reason that the relationship developed. Humans benefited from the canid's loyalty, cooperation, teamwork, alertness and tracking abilities, while the wolf may have benefited from the use of weapons to tackle larger prey and the sharing of food. Humans and dogs may have evolved together.",
"title": "Canids and humans"
},
{
"paragraph_id": 30,
"text": "Among canids, only the gray wolf has widely been known to prey on humans. Nonetheless, at least two records of coyotes killing humans have been published, and at least two other reports of golden jackals killing children. Human beings have trapped and hunted some canid species for their fur and some, especially the gray wolf, the coyote and the red fox, for sport. Canids such as the dhole are now endangered in the wild because of persecution, habitat loss, a depletion of ungulate prey species and transmission of diseases from domestic dogs.",
"title": "Canids and humans"
}
] | Canidae is a biological family of dog-like carnivorans, colloquially referred to as dogs, and constitutes a clade. A member of this family is also called a canid. The family includes three subfamilies: the Caninae, the extinct Borophaginae and Hesperocyoninae. The Caninae are known as canines, and include domestic dogs, wolves, coyotes, foxes, jackals and other species. Canids are found on all continents except Antarctica, having arrived independently or accompanied by human beings over extended periods of time. Canids vary in size from the 2-metre-long (6.6 ft) gray wolf to the 24-centimetre-long (9.4 in) fennec fox. The body forms of canids are similar, typically having long muzzles, upright ears, teeth adapted for cracking bones and slicing flesh, long legs, and bushy tails. They are mostly social animals, living together in family units or small groups and behaving co-operatively. Typically, only the dominant pair in a group breeds and a litter of young are reared annually in an underground den. Canids communicate by scent signals and vocalizations. One canid, the domestic dog, originated from a symbiotic relationship with Upper Paleolithic humans and today remains one of the most widely kept domestic animals. | 2001-10-09T19:53:20Z | 2023-12-18T18:06:48Z | [
"Template:Reflist",
"Template:Cite magazine",
"Template:Cite iucn",
"Template:Wikispecies",
"Template:Sfn",
"Template:DentalFormula",
"Template:Pn",
"Template:Cite web",
"Template:ISBN",
"Template:Short description",
"Template:Expand Italian",
"Template:Cite journal",
"Template:Commons category",
"Template:NCBI taxid",
"Template:IPAc-en",
"Template:Further",
"Template:Carnivora",
"Template:Clade",
"Template:Canidae extinct nav",
"Template:Authority control",
"Template:Use dmy dates",
"Template:Convert",
"Template:Rp",
"Template:Multiple image",
"Template:Good article",
"Template:See also",
"Template:Cite news",
"Template:Automatic taxobox",
"Template:Visible anchor",
"Template:Taxonbar",
"Template:Cn",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Canidae |
6,739 | Subspecies of Canis lupus | There are 38 subspecies of Canis lupus listed in the taxonomic authority Mammal Species of the World (2005, 3rd edition). These subspecies were named over the past 250 years, and since their naming, a number of them have gone extinct. The nominate subspecies is the Eurasian wolf (Canis lupus lupus).
In 1758, the Swedish botanist and zoologist Carl Linnaeus published in his Systema Naturae the binomial nomenclature – or the two-word naming – of species. Canis is the Latin word meaning "dog", and under this genus he listed the dog-like carnivores including domestic dogs, wolves, and jackals. He classified the domestic dog as Canis familiaris, and on the next page he classified the wolf as Canis lupus. Linnaeus considered the dog to be a separate species from the wolf because of its cauda recurvata – its upturned tail – which is not found in any other canid.
In 1999, a study of mitochondrial DNA indicated that the domestic dog may have originated from multiple wolf populations, with the dingo and New Guinea singing dog "breeds" having developed at a time when human populations were more isolated from each other. In the third edition of Mammal Species of the World published in 2005, the mammalogist W. Christopher Wozencraft listed under the wolf Canis lupus some 36 wild subspecies, and proposed two additional subspecies: familiaris Linnaeus, 1758 and dingo Meyer, 1793. Wozencraft included hallstromi – the New Guinea singing dog – as a taxonomic synonym for the dingo. Wozencraft referred to the mtDNA study as one of the guides in forming his decision, and listed the 38 subspecies under the biological common name of "wolf", with the nominate subspecies being the Eurasian wolf (Canis lupus lupus) based on the type specimen that Linnaeus studied in Sweden. However, the classification of several of these canines as either species or subspecies has recently been challenged.
Living subspecies recognized by MSW3 as of 2005 and divided into Old World and New World:
Sokolov and Rossolimo (1985) recognised nine Old World subspecies of wolf. These were C. l. lupus, C. l. albus, C. l. pallipes, C. l. cubanensis, C. l. campestris, C. l. chanco, C. l. desortorum, C. l. hattai, and C. l. hodophilax. In his 1995 statistical analysis of skull morphometrics, mammalogist Robert Nowak recognized the first four of those subspecies, synonymized campestris, chanco and desortorum with C. l. lupus, but did not examine the two Japanese subspecies. In addition, he recognized C. l. communis as a subspecies distinct from C. l. lupus. In 2003, Nowak also recognized the distinctiveness of C. l. arabs, C. l. hattai, C. l. italicus, and C. l. hodophilax. In 2005, MSW3 included C. l. filchneri. In 2003, two forms were distinguished in southern China and Inner Mongolia as being separate from C. l. chanco and C. l. filchneri and have yet to be named.
For North America, in 1944 the zoologist Edward Goldman recognized as many as 23 subspecies based on morphology. In 1959, E. Raymond Hall proposed that there had been 24 subspecies of lupus in North America. In 1970, L. David Mech proposed that there were "probably far too many subspecific designations...in use", as most did not exhibit enough points of differentiation to be classified as separate subspecies. The 24 subspecies were accepted by many authorities in 1981, and these were based on morphological or geographical differences, or a unique history. In 1995, the American mammalogist Robert M. Nowak analyzed data on the skull morphology of wolf specimens from around the world. For North America, he proposed that there were only five subspecies of the wolf. These include a large-toothed Arctic wolf named C. l. arctos, a large wolf from Alaska and western Canada named C. l. occidentalis, a small wolf from southeastern Canada named C. l. lycaon, a small wolf from the southwestern U.S. named C. l. baileyi and a moderate-sized wolf that was originally found from Texas to Hudson Bay and from Oregon to Newfoundland named C. l. nubilus.
The taxonomic classification of Canis lupus in Mammal Species of the World (3rd edition, 2005) listed 27 subspecies of North American wolf, corresponding to the 24 Canis lupus subspecies and the three Canis rufus subspecies of Hall (1981). The table below shows the extant subspecies, with the extinct ones listed in the following section.
Subspecies recognized by MSW3 as of 2005 which have gone extinct over the past 150 years:
Subspecies discovered since the publishing of MSW3 in 2005 which have gone extinct over the past 150 years:
In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral dogs (Canis familiaris). In 2020, a literature review of canid domestication stated that modern dogs were not descended from the same Canis lineage as modern wolves, and proposed that dogs may be descended from a Pleistocene wolf closer in size to a village dog. In 2021, the American Society of Mammalogists also considered dingos a feral dog (Canis familiaris) population.
The Italian wolf (or Apennine wolf) was first recognised as a distinct subspecies (Canis lupus italicus) in 1921 by zoologist Giuseppe Altobello. Altobello's classification was later rejected by several authors, including Reginald Innes Pocock, who synonymised C. l. italicus with C. l. lupus. In 2002, the noted paleontologist R.M. Nowak reaffirmed the morphological distinctiveness of the Italian wolf and recommended the recognition of Canis lupus italicus. A number of DNA studies have found the Italian wolf to be genetically distinct. In 2004, the genetic distinction of the Italian wolf subspecies was supported by analysis which consistently assigned all the wolf genotypes of a sample in Italy to a single group. This population also showed a unique mitochondrial DNA control-region haplotype, the absence of private alleles and lower heterozygosity at microsatellite loci, as compared to other wolf populations. In 2010, a genetic analysis indicated that a single wolf haplotype (w22) unique to the Apennine Peninsula and one of the two haplotypes (w24, w25), unique to the Iberian Peninsula, belonged to the same haplogroup as the prehistoric wolves of Europe. Another haplotype (w10) was found to be common to the Iberian peninsula and the Balkans. These three populations with geographic isolation exhibited a near lack of gene flow and spatially correspond to three glacial refugia.
The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lupus italicus; however, NCBI/Genbank publishes research papers under that name.
The Iberian wolf was first recognised as a distinct subspecies (Canis lupus signatus) in 1907 by zoologist Ángel Cabrera. The wolves of the Iberian peninsula have morphologically distinct features from other Eurasian wolves and each are considered by their researchers to represent their own subspecies.
The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lupus signatus; however, NCBI/Genbank does list it.
The Himalayan wolf is distinguished by its mitochondrial DNA, which is basal to all other wolves. The taxonomic name of this wolf is disputed, with the species Canis himalayensis being proposed based on two limited DNA studies. In 2017, a study of mitochondrial DNA, X-chromosome (maternal lineage) markers and Y-chromosome (male lineage) markers found that the Himalayan wolf was genetically basal to the Holarctic grey wolf and has an association with the African golden wolf.
In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group noted that the Himalayan wolf's distribution included the Himalayan range and the Tibetan Plateau. The group recommends that this wolf lineage be known as the "Himalayan wolf" and classified as Canis lupus chanco until a genetic analysis of the holotypes is available. In 2020, further research on the Himalayan wolf found that it warranted species-level recognition under the Unified Species Concept, the Differential Fitness Species Concept, and the Biological Species Concept. It was identified as an Evolutionary Significant Unit that warranted assignment onto the IUCN Red List for its protection.
The Indian plains wolf is a proposed clade within the Indian wolf (Canis lupus pallipes) that is distinguished by its mitochondrial DNA, which is basal to all other wolves except for the Himalayan wolf. The taxonomic status of this wolf clade is disputed, with the separate species Canis indica being proposed based on two limited DNA studies. The proposal has not been endorsed because the studies relied on a limited number of museum and zoo samples that may not have been representative of the wild population, and a call for further fieldwork has been made.
The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis indica; however, NCBI/Genbank lists it as a new subspecies, Canis lupus indica.
In 2017, a comprehensive study found that the gray wolf was present across all of mainland China, both in the past and today. It exists in southern China, which refutes claims made by some researchers in the Western world that the wolf had never existed in southern China. This wolf has not been taxonomically classified.
In 2019, a genomic study on the wolves of China included museum specimens of wolves from southern China that were collected between 1963 and 1988. The wolves in the study formed three clades: northern Asian wolves that included those from northern China and eastern Russia, Himalayan wolves from the Tibetan Plateau, and a unique population from southern China. One specimen from Zhejiang Province in eastern China shared gene flow with the wolves from southern China; however, its genome was 12-14 percent admixed with a canid that may be the dhole or an unknown canid that predates the genetic divergence of the dhole. The wolf population from southern China is believed to be still existing in that region.
A study of the three coastal wolves indicates a close phylogenetic relationship across regions that are geographically and ecologically contiguous, and the study proposed that Canis lupus ligoni (the Alexander Archipelago wolf), Canis lupus columbianus (the British Columbian wolf), and Canis lupus crassodon (the Vancouver Coastal Sea wolf) should be recognized as a single subspecies of Canis lupus, synonymized as Canis lupus crassodon. They share the same habitat and prey species, and together form one of the six North American ecotypes identified in one study – a genetically and ecologically distinct population separated from other populations by its different type of habitat.
The eastern wolf has two proposals over its origin. One is that the eastern wolf is a distinct species (C. lycaon) that evolved in North America, as opposed to the gray wolf that evolved in the Old World, and is related to the red wolf. The other is that it is derived from admixture between gray wolves which inhabited the Great Lakes area and coyotes, forming a hybrid that was classified as a distinct species by mistake.
The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lycaon; however, NCBI/Genbank does list it. In 2021, the American Society of Mammalogists also considered Canis lycaon a valid species.
The red wolf is an enigmatic taxon, of which there are two proposals over its origin. One is that the red wolf was a distinct species (C. rufus) that has undergone human-influenced admixture with coyotes. The other is that it was never a distinct species but was derived from past admixture between coyotes and gray wolves, due to the gray wolf population being eliminated by humans.
The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis rufus; however, NCBI/Genbank does list it. In 2021, the American Society of Mammalogists also considered Canis rufus a valid species.
{
"paragraph_id": 0,
"text": "There are 38 subspecies of Canis lupus listed in the taxonomic authority Mammal Species of the World (2005, 3rd edition). These subspecies were named over the past 250 years, and since their naming, a number of them have gone extinct. The nominate subspecies is the Eurasian wolf (Canis lupus lupus).",
"title": ""
},
{
"paragraph_id": 1,
"text": "In 1758, the Swedish botanist and zoologist Carl Linnaeus published in his Systema Naturae the binomial nomenclature – or the two-word naming – of species. Canis is the Latin word meaning \"dog\", and under this genus he listed the dog-like carnivores including domestic dogs, wolves, and jackals. He classified the domestic dog as Canis familiaris, and on the next page he classified the wolf as Canis lupus. Linnaeus considered the dog to be a separate species from the wolf because of its head and body and tail cauda recurvata - its upturning tail - which is not found in any other canid.",
"title": "Taxonomy"
},
{
"paragraph_id": 2,
"text": "In 1999, a study of mitochondrial DNA indicated that the domestic dog may have originated from multiple wolf populations, with the dingo and New Guinea singing dog \"breeds\" having developed at a time when human populations were more isolated from each other. In the third edition of Mammal Species of the World published in 2005, the mammalogist W. Christopher Wozencraft listed under the wolf Canis lupus some 36 wild subspecies, and proposed two additional subspecies: familiaris Linnaeus, 1758 and dingo Meyer, 1793. Wozencraft included hallstromi – the New Guinea singing dog – as a taxonomic synonym for the dingo. Wozencraft referred to the mDNA study as one of the guides in forming his decision, and listed the 38 subspecies under the biological common name of \"wolf\", with the nominate subspecies being the Eurasian wolf (Canis lupus lupus) based on the type specimen that Linnaeus studied in Sweden. However, the classification of several of these canines as either species or subspecies has recently been challenged.",
"title": "Taxonomy"
},
{
"paragraph_id": 3,
"text": "Living subspecies recognized by MSW3 as of 2005 and divided into Old World and New World:",
"title": "List of extant subspecies"
},
{
"paragraph_id": 4,
"text": "Sokolov and Rossolimo (1985) recognised nine Old World subspecies of wolf. These were C. l. lupus, C. l. albus, C. l. pallipes, C. l. cubanensis, C. l. campestris, C. l. chanco, C. l. desortorum, C. l. hattai, and C. l. hodophilax. In his 1995 statistical analysis of skull morphometrics, mammalogist Robert Nowak recognized the first four of those subspecies, synonymized campestris, chanco and desortorum with C. l. lupus, but did not examine the two Japanese subspecies. In addition, he recognized C. l. communis as a subspecies distinct from C. l. lupus. In 2003, Nowak also recognized the distinctiveness of C. l. arabs, C. l. hattai, C. l. italicus, and C. l. hodophilax. In 2005, MSW3 included C. l. filchneri. In 2003, two forms were distinguished in southern China and Inner Mongolia as being separate from C. l. chanco and C. l. filchneri and have yet to be named.",
"title": "List of extant subspecies"
},
{
"paragraph_id": 5,
"text": "For North America, in 1944 the zoologist Edward Goldman recognized as many as 23 subspecies based on morphology. In 1959, E. Raymond Hall proposed that there had been 24 subspecies of lupus in North America. In 1970, L. David Mech proposed that there was \"probably far too many subspecific designations...in use\", as most did not exhibit enough points of differentiation to be classified as separate subspecies. The 24 subspecies were accepted by many authorities in 1981 and these were based on morphological or geographical differences, or a unique history. In 1995, the American mammologist Robert M. Nowak analyzed data on the skull morphology of wolf specimens from around the world. For North America, he proposed that there were only five subspecies of the wolf. These include a large-toothed Arctic wolf named C. l. arctos, a large wolf from Alaska and western Canada named C. l. occidentalis, a small wolf from southeastern Canada named C. l. lycaon, a small wolf from the southwestern U.S. named C. l. baileyi and a moderate-sized wolf that was originally found from Texas to Hudson Bay and from Oregon to Newfoundland named C. l. nubilus.",
"title": "List of extant subspecies"
},
{
"paragraph_id": 6,
"text": "The taxonomic classification of Canis lupus in Mammal Species of the World (3rd edition, 2005) listed 27 subspecies of North American wolf, corresponding to the 24 Canis lupus subspecies and the three Canis rufus subspecies of Hall (1981). The table below shows the extant subspecies, with the extinct ones listed in the following section.",
"title": "List of extant subspecies"
},
{
"paragraph_id": 7,
"text": "Subspecies recognized by MSW3 as of 2005 which have gone extinct over the past 150 years:",
"title": "List of extinct subspecies"
},
{
"paragraph_id": 8,
"text": "Subspecies discovered since the publishing of MSW3 in 2005 which have gone extinct over the past 150 years:",
"title": "List of extinct subspecies"
},
{
"paragraph_id": 9,
"text": "In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group considered the New Guinea singing dog and the dingo to be feral dogs (Canis familiaris). In 2020, a literature review of canid domestication stated that modern dogs were not descended from the same Canis lineage as modern wolves, and proposed that dogs may be descended from a Pleistocene wolf closer in size to a village dog. In 2021, the American Society of Mammalogists also considered dingos a feral dog (Canis familiaris) population.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 10,
"text": "The Italian wolf (or Apennine wolf) was first recognised as a distinct subspecies (Canis lupus italicus) in 1921 by zoologist Giuseppe Altobello. Altobello's classification was later rejected by several authors, including Reginald Innes Pocock, who synonymised C. l. italicus with C. l. lupus. In 2002, the noted paleontologist R.M. Nowak reaffirmed the morphological distinctiveness of the Italian wolf and recommended the recognition of Canis lupus italicus. A number of DNA studies have found the Italian wolf to be genetically distinct. In 2004, the genetic distinction of the Italian wolf subspecies was supported by analysis which consistently assigned all the wolf genotypes of a sample in Italy to a single group. This population also showed a unique mitochondrial DNA control-region haplotype, the absence of private alleles and lower heterozygosity at microsatellite loci, as compared to other wolf populations. In 2010, a genetic analysis indicated that a single wolf haplotype (w22) unique to the Apennine Peninsula and one of the two haplotypes (w24, w25), unique to the Iberian Peninsula, belonged to the same haplogroup as the prehistoric wolves of Europe. Another haplotype (w10) was found to be common to the Iberian peninsula and the Balkans. These three populations with geographic isolation exhibited a near lack of gene flow and spatially correspond to three glacial refugia.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 11,
"text": "The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lupus italicus; however, NCBI/Genbank publishes research papers under that name.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 12,
"text": "The Iberian wolf was first recognised as a distinct subspecies (Canis lupus signatus) in 1907 by zoologist Ángel Cabrera. The wolves of the Iberian peninsula have morphologically distinct features from other Eurasian wolves and each are considered by their researchers to represent their own subspecies.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 13,
"text": "The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lupus signatus; however, NCBI/Genbank does list it.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 14,
"text": "The Himalayan wolf is distinguished by its mitochondrial DNA, which is basal to all other wolves. The taxonomic name of this wolf is disputed, with the species Canis himalayensis being proposed based on two limited DNA studies. In 2017, a study of mitochondrial DNA, X-chromosome (maternal lineage) markers and Y-chromosome (male lineage) markers found that the Himalayan wolf was genetically basal to the Holarctic grey wolf and has an association with the African golden wolf.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 15,
"text": "In 2019, a workshop hosted by the IUCN/SSC Canid Specialist Group noted that the Himalayan wolf's distribution included the Himalayan range and the Tibetan Plateau. The group recommends that this wolf lineage be known as the \"Himalayan wolf\" and classified as Canis lupus chanco until a genetic analysis of the holotypes is available. In 2020, further research on the Himalayan wolf found that it warranted species-level recognition under the Unified Species Concept, the Differential Fitness Species Concept, and the Biological Species Concept. It was identified as an Evolutionary Significant Unit that warranted assignment onto the IUCN Red List for its protection.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 16,
"text": "The Indian plains wolf is a proposed clade within the Indian wolf (Canis lupus pallipes) that is distinguished by its mitochondrial DNA, which is basal to all other wolves except for the Himalayan wolf. The taxonomic status of this wolf clade is disputed, with the separate species Canis indica being proposed based on two limited DNA studies. The proposal has not been endorsed because they relied on a limited number of museum and zoo samples that may not have been representative of the wild population and a call for further fieldwork has been made.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 17,
"text": "The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis indica; however, NCBI/Genbank lists it as a new subspecies, Canis lupus indica.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 18,
"text": "In 2017, a comprehensive study found that the gray wolf was present across all of mainland China, both in the past and today. It exists in southern China, which refutes claims made by some researchers in the Western world that the wolf had never existed in southern China. This wolf has not been taxonomically classified.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 19,
"text": "In 2019, a genomic study on the wolves of China included museum specimens of wolves from southern China that were collected between 1963 and 1988. The wolves in the study formed three clades: northern Asian wolves that included those from northern China and eastern Russia, Himalayan wolves from the Tibetan Plateau, and a unique population from southern China. One specimen from Zhejiang Province in eastern China shared gene flow with the wolves from southern China; however, its genome was 12-14 percent admixed with a canid that may be the dhole or an unknown canid that predates the genetic divergence of the dhole. The wolf population from southern China is believed to be still existing in that region.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 20,
"text": "A study of the three coastal wolves indicates a close phylogenetic relationship across regions that are geographically and ecologically contiguous, and the study proposed that Canis lupus ligoni (the Alexander Archipelago wolf), Canis lupus columbianus (the British Columbian wolf), and Canis lupus crassodon (the Vancouver Coastal Sea wolf) should be recognized as a single subspecies of Canis lupus, synonymized as Canis lupus crassodon. They share the same habitat and prey species, and form one study's six identified North American ecotypes - a genetically and ecologically distinct population separated from other populations by their different type of habitat.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 21,
"text": "The eastern wolf has two proposals over its origin. One is that the eastern wolf is a distinct species (C. lycaon) that evolved in North America, as opposed to the gray wolf that evolved in the Old World, and is related to the red wolf. The other is that it is derived from admixture between gray wolves which inhabited the Great Lakes area and coyotes, forming a hybrid that was classified as a distinct species by mistake.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 22,
"text": "The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis lycaon, however NCBI/Genbank does list it. In 2021, the American Society of Mammalogists also considered Canis lycaon a valid species.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 23,
"text": "The red wolf is an enigmatic taxon, of which there are two proposals over its origin. One is that the red wolf was a distinct species (C. rufus) that has undergone human-influenced admixture with coyotes. The other is that it was never a distinct species but was derived from past admixture between coyotes and gray wolves, due to the gray wolf population being eliminated by humans.",
"title": "Disputed subspecies"
},
{
"paragraph_id": 24,
"text": "The taxonomic reference Mammal Species of the World (3rd edition, 2005) does not recognize Canis rufus, however NCBI/Genbank does list it. In 2021, the American Society of Mammalogists also considered Canis rufus a valid species.",
"title": "Disputed subspecies"
}
] | There are 38 subspecies of Canis lupus listed in the taxonomic authority Mammal Species of the World. These subspecies were named over the past 250 years, and since their naming, a number of them have gone extinct. The nominate subspecies is the Eurasian wolf. | 2001-10-09T19:55:46Z | 2023-12-23T00:45:06Z | [
"Template:Notelist",
"Template:Reflist",
"Template:Cite book",
"Template:ISBN",
"Template:Cite journal",
"Template:Cite report",
"Template:In lang",
"Template:Webarchive",
"Template:Not a typo",
"Template:Clear",
"Template:Smalldiv",
"Template:ITIS",
"Template:Cite web",
"Template:BioRef",
"Template:Dead link",
"Template:Grey wolf subspecies",
"Template:Multiple image",
"Template:Cladogram",
"Template:Harvnb",
"Template:MSW3 Wozencraft",
"Template:Short description",
"Template:As of",
"Template:Further",
"Template:OEtymD",
"Template:Mammal lists"
] | https://en.wikipedia.org/wiki/Subspecies_of_Canis_lupus |
6,742 | Central Asia | Central Asia is a subregion of Asia that stretches from the Caspian Sea in the southwest and Eastern Europe in the northwest to Western China and Mongolia in the east, and from Afghanistan and Iran in the south to Russia in the north. It includes the former Soviet republics of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. Central Asian nations are colloquially referred to as the "-stans" as the countries all have names ending with the Persian suffix "-stan", meaning "land of".
In the pre-Islamic and early Islamic eras (c. 1000 and earlier) Central Asia was inhabited predominantly by Iranian peoples, populated by Eastern Iranian-speaking Bactrians, Sogdians, Chorasmians, and the semi-nomadic Scythians and Dahae. After expansion by Turkic peoples, Central Asia also became the homeland for the Uzbeks, Kazakhs, Tatars, Turkmens, Kyrgyz, and Uyghurs; Turkic languages largely replaced the Iranian languages spoken in the area, with the exception of Tajikistan and areas where Tajik is spoken.
Central Asia was historically closely tied to the Silk Road trade routes, acting as a crossroads for the movement of people, goods, and ideas between Europe and the Far East. Most countries in Central Asia are still an integral part of the world economy.
From the mid-19th century until almost the end of the 20th century, Central Asia was colonised by the Russians and incorporated into the Russian Empire, and later the Soviet Union, which led to Russians and other Slavs migrating into the area. Modern-day Central Asia is home to a large population of European settlers, who mostly live in Kazakhstan: 7 million Russians, 500,000 Ukrainians, and about 170,000 Germans. Stalinist-era forced deportation policies also mean that over 300,000 Koreans live there.
Central Asia has a population of about 77 million, in five countries: Kazakhstan (19 million), Kyrgyzstan (7 million), Tajikistan (10 million), Turkmenistan (6 million), and Uzbekistan (35 million).
One of the first geographers to mention Central Asia as a distinct region of the world was Alexander von Humboldt. The borders of Central Asia are subject to multiple definitions. Historically, political geography and culture have been two significant parameters widely used in scholarly definitions of Central Asia. Humboldt's definition comprised every country between 5° north and 5° south of the latitude 44.5°N. Humboldt mentions some geographic features of this region, which include the Caspian Sea in the west, the Altai mountains in the north and the Hindu Kush and Pamir mountains in the south. He did not give an eastern border for the region. His legacy is still seen: Humboldt University of Berlin, named after him, offers a course in Central Asian studies. The Russian geographer Nikolaĭ Khanykov questioned the latitudinal definition of Central Asia and preferred a physical one encompassing all the landlocked countries of the region, including Afghanistan, Khorasan (Northeast Iran), Kyrgyzstan, Tajikistan, Turkmenistan, Uyghuristan (Xinjiang), and Uzbekistan.
Russian culture has two distinct terms: Средняя Азия (Srednyaya Aziya or "Middle Asia", the narrower definition, which includes only those traditionally non-Slavic, Central Asian lands that were incorporated within those borders of historical Russia) and Центральная Азия (Tsentralnaya Aziya or "Central Asia", the wider definition, which includes Central Asian lands that have never been part of historical Russia). The latter definition includes Afghanistan and 'East Turkestan'.
The most limited definition was the official one of the Soviet Union, which defined Middle Asia as consisting solely of Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan, omitting Kazakhstan. Soon after the dissolution of the Soviet Union in 1991, the leaders of the four former Soviet Central Asian Republics met in Tashkent and declared that the definition of Central Asia should include Kazakhstan as well as the original four included by the Soviets. Since then, this has become the most common definition of Central Asia.
In 1978, UNESCO defined the region as "Afghanistan, north-eastern Iran, Pakistan, northern India, western China, Mongolia and the Soviet Central Asian Republics".
An alternative method is to define the region based on ethnicity, and in particular, areas populated by Eastern Turkic, Eastern Iranian, or Mongolian peoples. These areas include Xinjiang Uyghur Autonomous Region, the Turkic regions of southern Siberia, the five republics, and Afghan Turkestan. Afghanistan as a whole, the northern and western areas of Pakistan and the Kashmir Valley of India may also be included. The Tibetans and Ladakhis are also included. Most of the mentioned peoples are considered the "indigenous" peoples of the vast region. Central Asia is sometimes referred to as Turkestan.
Central Asia is a region of varied geography, including high passes and mountains (Tian Shan), vast deserts (Kyzyl Kum, Taklamakan), and especially treeless, grassy steppes. The vast steppe areas of Central Asia are considered together with the steppes of Eastern Europe as a homogeneous geographical zone known as the Eurasian Steppe.
Much of the land of Central Asia is too dry or too rugged for farming. The Gobi desert extends from the foot of the Pamirs, 77° E, to the Great Khingan (Da Hinggan) Mountains, 116°–118° E.
A majority of the people earn a living by herding livestock. Industrial activity centers in the region's cities.
Major rivers of the region include the Amu Darya, the Syr Darya, Irtysh, the Hari River and the Murghab River. Major bodies of water include the Aral Sea and Lake Balkhash, both of which are part of the huge west-central Asian endorheic basin that also includes the Caspian Sea.
Both of these bodies of water have shrunk significantly in recent decades due to the diversion of water from the rivers that feed them for irrigation and industrial purposes. Water is an extremely valuable resource in arid Central Asia, and disputes over it can lead to significant international tension.
Central Asia is bounded on the north by the forests of Siberia. The northern half of Central Asia (Kazakhstan) is the middle part of the Eurasian steppe. Westward the Kazakh steppe merges into the Russian-Ukrainian steppe and eastward into the steppes and deserts of Dzungaria and Mongolia. Southward the land becomes increasingly dry and the nomadic population increasingly thin. The south supports areas of dense population and cities wherever irrigation is possible. The main irrigated areas are along the eastern mountains, along the Oxus and Jaxartes Rivers and along the north flank of the Kopet Dagh near the Persian border. East of the Kopet Dagh is the important oasis of Merv and then a few places in Afghanistan like Herat and Balkh. Two projections of the Tian Shan create three "bays" along the eastern mountains. The largest, in the north, is eastern Kazakhstan, traditionally called Jetysu or Semirechye, which contains Lake Balkhash. In the center is the small but densely populated Ferghana valley. In the south is Bactria, later called Tocharistan, which is bounded on the south by the Hindu Kush mountains of Afghanistan. The Syr Darya (Jaxartes) rises in the Ferghana valley and the Amu Darya (Oxus) rises in Bactria. Both flow northwest into the Aral Sea. Where the Oxus meets the Aral Sea it forms a large delta called Khwarazm, which later became the Khanate of Khiva. North of the Oxus is the less famous but equally important Zarafshan River, which waters the great trading cities of Bukhara and Samarkand. The other great commercial city was Tashkent, northwest of the mouth of the Ferghana valley. The land immediately north of the Oxus was called Transoxiana and also Sogdia, especially when referring to the Sogdian merchants who dominated the Silk Road trade.
To the east, Dzungaria and the Tarim Basin were united into the Manchu-Chinese province of Xinjiang (Sinkiang; Hsin-kiang) about 1759. Caravans from China usually went along the north or south side of the Tarim Basin and joined at Kashgar before crossing the mountains northwest to Ferghana or southwest to Bactria. A minor branch of the Silk Road went north of the Tian Shan through Dzungaria and Jetysu before turning southwest near Tashkent. Nomadic migrations usually moved from Mongolia through Dzungaria before turning southwest to conquer the settled lands or continuing west toward Europe.
The Kyzyl Kum Desert or semi-desert is between the Oxus and Jaxartes, and the Karakum Desert is between the Oxus and Kopet Dagh in Turkmenistan. Khorasan meant approximately northeast Persia and northern Afghanistan. Margiana was the region around Merv. The Ustyurt Plateau is between the Aral and Caspian Seas.
To the southwest, across the Kopet Dagh, lies Persia. From here Persian and Islamic civilisation penetrated Central Asia and dominated its high culture until the Russian conquest. In the southeast is the route to India. In early times Buddhism spread north, and throughout much of history warrior kings and tribes would move southeast to establish their rule in northern India. Most nomadic conquerors entered from the northeast. After 1800 western civilisation in its Russian and Soviet form penetrated from the northwest.
Because Central Asia is landlocked and not buffered by a large body of water, temperature fluctuations are often severe, excluding the hot, sunny summer months. In most areas the climate is dry and continental, with hot summers and cool to cold winters, with occasional snowfall. Outside high-elevation areas, the climate is mostly semi-arid to arid. At lower elevations, summers are hot with blazing sunshine. Winters feature occasional rain or snow from low-pressure systems that cross the area from the Mediterranean Sea. Average monthly precipitation is very low from July to September, rises in autumn (October and November) and is highest in March or April, followed by swift drying in May and June. Winds can be strong, sometimes producing dust storms, especially toward the end of the summer in September and October. Specific cities that exemplify Central Asian climate patterns include Tashkent and Samarkand (Uzbekistan), Ashgabat (Turkmenistan), and Dushanbe (Tajikistan). The last of these represents one of the wettest climates in Central Asia, with an average annual precipitation of over 560 mm (22 inches).
Biogeographically, Central Asia is part of the Palearctic realm. The largest biome in Central Asia is the temperate grasslands, savannas, and shrublands biome. Central Asia also contains the montane grasslands and shrublands, deserts and xeric shrublands and temperate coniferous forests biomes.
As of 2022, Central Asia is one of the regions most vulnerable to global climate change in the world, and the region's temperatures are rising faster than the global average.
Although the place of Central Asia in world history was marginalised during the golden age of Orientalism, contemporary historiography has rediscovered the "centrality" of Central Asia. The history of Central Asia is defined by the area's climate and geography. The aridness of the region made agriculture difficult, and its distance from the sea cut it off from much trade. Thus, few major cities developed in the region; instead, the area was for millennia dominated by the nomadic horse peoples of the steppe.
Relations between the steppe nomads and the settled people in and around Central Asia were long marked by conflict. The nomadic lifestyle was well suited to warfare, and the steppe horse riders became some of the most militarily potent people in the world, limited only by their lack of internal unity. Any internal unity that was achieved was most probably due to the influence of the Silk Road, which ran through Central Asia. Periodically, great leaders or changing conditions would organise several tribes into one force and create an almost unstoppable power. These included the Hun invasion of Europe, the Five Barbarians rebellions in China and most notably the Mongol conquest of much of Eurasia.
During pre-Islamic and early Islamic times, Central Asia was inhabited predominantly by speakers of Iranian languages. Among the ancient sedentary Iranian peoples, the Sogdians and Chorasmians played an important role, while Iranian peoples such as the Scythians and, later, the Alans lived a nomadic or semi-nomadic lifestyle.
The main migration of Turkic peoples occurred between the 6th and 11th centuries, when they spread across most of Central Asia. Over the past few thousand years, the Eurasian Steppe slowly transitioned from Indo-European and Iranian-speaking groups with dominant West-Eurasian ancestry to a more heterogeneous region with increasing East Asian ancestry through Turkic and Mongolian groups, including extensive Turkic and later Mongol migrations out of Mongolia and the slow assimilation of local populations. In the 8th century AD, the Islamic expansion reached the region but had no significant demographic impact. In the 13th century AD, the Mongolian invasion of Central Asia brought most of the region under Mongolian influence, which had "enormous demographic success" but did not impact the cultural or linguistic landscape.
Once populated by Iranian tribes and other Indo-European-speaking people, Central Asia experienced numerous invasions emanating out of Southern Siberia and Mongolia that would drastically affect the region. Genetic data shows that the different Central Asian Turkic-speaking peoples have between ~22% and ~70% East Asian ancestry (represented by "Baikal hunter-gatherer ancestry" shared with other Northeast Asians and Eastern Siberians), in contrast to Iranian-speaking Central Asians, specifically Tajiks, who display genetic continuity with the Indo-Iranians of the Iron Age. Certain Turkic ethnic groups, specifically the Kazakhs, display even higher East Asian ancestry. This is explained by substantial Mongolian influence on the Kazakh genome, through significant admixture between the medieval Kipchaks of Central Asia and the invading medieval Mongolians. The data suggests that the Mongol invasion of Central Asia had lasting impacts on the genetic makeup of Kazakhs. According to recent genetic genealogy testing, the genetic admixture of the Uzbeks clusters somewhere between the Iranian peoples and the Mongols. Another study shows that the Uzbeks are closely related to other Turkic peoples of Central Asia and rather distant from Iranian people. The study also analysed the maternal and paternal DNA haplogroups and shows that Turkic-speaking groups are more homogeneous than Iranian-speaking groups. Genetic studies analyzing the full genome of Uzbeks and other Central Asian populations found that about ~27–60% of Uzbek ancestry is derived from East Asian sources, with the remainder (~40–73%) being made up of European and Middle Eastern components. According to a recent study, the Kyrgyz, Kazakhs, Uzbeks, and Turkmens share more of their gene pool with various East Asian and Siberian populations than with West Asian or European populations. Though the Turkmens have a large percentage from populations to the east, their main components are Central Asian. The study further suggests that both migration and linguistic assimilation helped to spread the Turkic languages in Eurasia.
The Tang dynasty of China expanded westwards and controlled large parts of Central Asia, directly and indirectly through its Turkic vassals. Tang China actively supported the Turkification of Central Asia, while extending its cultural influence. The Tang Chinese were defeated by the Abbasid Caliphate at the Battle of Talas in 751, marking the end of the Tang dynasty's western expansion and of 150 years of Chinese influence. The Tibetan Empire took the chance to rule portions of Central Asia and South Asia. During the 13th and 14th centuries, the Mongols conquered and ruled the largest contiguous empire in recorded history. Most of Central Asia fell under the control of the Chagatai Khanate.
The dominance of the nomads ended in the 16th century, as firearms allowed settled peoples to gain control of the region. Russia, China, and other powers expanded into the region and had captured the bulk of Central Asia by the end of the 19th century. After the Russian Revolution, the western Central Asian regions were incorporated into the Soviet Union. The eastern part of Central Asia, known as Xinjiang, was incorporated into the People's Republic of China, having been previously ruled by the Qing dynasty and the Republic of China. Mongolia gained its independence from China and has remained independent, though it was a Soviet satellite state until the dissolution of the Soviet Union. Afghanistan remained relatively independent of major influence by the Soviet Union until the Saur Revolution of 1978.
The Soviet areas of Central Asia saw much industrialisation and construction of infrastructure, but also the suppression of local cultures, hundreds of thousands of deaths from failed collectivisation programmes, and a lasting legacy of ethnic tensions and environmental problems. Soviet authorities deported millions of people, including entire nationalities, from western areas of the Soviet Union to Central Asia and Siberia. According to Touraj Atabaki and Sanjyot Mehendale, "From 1959 to 1970, about two million people from various parts of the Soviet Union migrated to Central Asia, of which about one million moved to Kazakhstan."
With the collapse of the Soviet Union, five countries gained independence: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. The historian and Turkologist Peter B. Golden explains that without the imperial manipulations of the Russian Empire and, above all, the Soviet Union, the creation of these republics would have been impossible.
In nearly all the new states, former Communist Party officials retained power as local strongmen. None of the new republics could be considered functional democracies in the early days of independence, although in recent years Kyrgyzstan, Kazakhstan and Mongolia have made further progress towards more open societies, unlike Uzbekistan, Tajikistan, and Turkmenistan, which have maintained many Soviet-style repressive tactics.
At the crossroads of Asia, shamanistic practices live alongside Buddhism. For example, Yama, Lord of Death, was revered in Tibet as a spiritual guardian and judge. Mongolian Buddhism, in particular, was influenced by Tibetan Buddhism. The Qianlong Emperor of Qing China in the 18th century was a Tibetan Buddhist and would sometimes travel from Beijing to other cities for personal religious worship.
Central Asia also has an indigenous form of improvisational oral poetry that is over 1000 years old. It is principally practiced in Kyrgyzstan and Kazakhstan by akyns, lyrical improvisationalists. They engage in lyrical battles, the aytysh or the alym sabak. The tradition arose out of early bardic oral historians. They are usually accompanied by a stringed instrument—in Kyrgyzstan, a three-stringed komuz, and in Kazakhstan, a similar two-stringed instrument, the dombra.
Photography in Central Asia began to develop after 1882, when a Russian Mennonite photographer named Wilhelm Penner moved to the Khanate of Khiva during the Mennonite migration to Central Asia led by Claas Epp, Jr. Upon his arrival in the Khanate of Khiva, Penner shared his photography skills with a local student, Khudaybergen Divanov, who later became the founder of Uzbek photography.
Some also learn to sing the Manas, Kyrgyzstan's epic poem (those who learn the Manas exclusively but do not improvise are called manaschis). During Soviet rule, akyn performance was co-opted by the authorities and subsequently declined in popularity. With the fall of the Soviet Union, it has enjoyed a resurgence, although akyns still use their art to campaign for political candidates. A 2005 article in The Washington Post proposed a similarity between the improvisational art of akyns and modern freestyle rap performed in the West.
As a consequence of Russian colonisation, European fine arts – painting, sculpture and graphics – have developed in Central Asia. The first years of the Soviet regime saw the appearance of modernism, which took inspiration from the Russian avant-garde movement. Until the 1980s, Central Asian arts had developed along with general tendencies of Soviet arts. In the 1990s, the arts of the region underwent significant changes. Institutionally speaking, some fields of art were shaped by the emergence of the art market, some remained representatives of official views, while many were sponsored by international organisations. The years 1990–2000 saw the establishment of contemporary arts. In the region, many important international exhibitions are taking place, Central Asian art is represented in European and American museums, and the Central Asian Pavilion at the Venice Biennale has been organised since 2005.
Equestrian sports are traditional in Central Asia, with disciplines like endurance riding, buzkashi, dzhigitovka and kyz kuu.
The traditional game of buzkashi is played throughout the Central Asian region, and the countries sometimes organise buzkashi competitions amongst each other. The first regional competition among the Central Asian countries, Russia, Chinese Xinjiang and Turkey was held in 2013. The first world title competition was played in 2017 and won by Kazakhstan.
Association football is popular across Central Asia. Most countries are members of the Central Asian Football Association, a region of the Asian Football Confederation. However, Kazakhstan is a member of UEFA.
Wrestling is popular across Central Asia, with Kazakhstan having claimed 14 Olympic medals, Uzbekistan seven, and Kyrgyzstan three. As former Soviet states, Central Asian countries have been successful in gymnastics.
Mixed martial arts is one of the more common sports in Central Asia, with Kyrgyz athlete Valentina Shevchenko holding the UFC Flyweight Championship title.
Cricket is the most popular sport in Afghanistan. The Afghanistan national cricket team, first formed in 2001, has claimed wins over Bangladesh, the West Indies and Zimbabwe.
Notable Kazakh competitors include cyclists Alexander Vinokourov and Andrey Kashechkin, boxers Vassiliy Jirov and Gennady Golovkin, runner Olga Shishigina, decathlete Dmitriy Karpov, gymnast Aliya Yussupova, judokas Askhat Zhitkeyev and Maxim Rakov, skier Vladimir Smirnov, weightlifter Ilya Ilyin, and figure skaters Denis Ten and Elizabet Tursynbaeva.
Notable Uzbekistani competitors include cyclist Djamolidine Abdoujaparov, boxer Ruslan Chagaev, canoer Michael Kolganov, gymnast Oksana Chusovitina, tennis player Denis Istomin, chess player Rustam Kasimdzhanov, and figure skater Misha Ge.
Since gaining independence in the early 1990s, the Central Asian republics have gradually been moving from a state-controlled economy to a market economy. However, reform has been deliberately gradual and selective, as governments strive to limit the social cost and ameliorate living standards. All five countries are implementing structural reforms to improve competitiveness. Kazakhstan is the only CIS country to be included in the 2020 and 2019 IWB World Competitiveness rankings. In particular, they have been modernizing the industrial sector and fostering the development of service industries through business-friendly fiscal policies and other measures, to reduce the share of agriculture in GDP. Between 2005 and 2013, the share of agriculture dropped in all but Tajikistan, where it increased while industry decreased. The fastest growth in industry was observed in Turkmenistan, whereas the services sector progressed most in the other four countries.
Public policies pursued by Central Asian governments focus on buffering the political and economic spheres from external shocks. This includes maintaining a trade balance, minimizing public debt and accumulating national reserves. They cannot totally insulate themselves from negative exterior forces, however, such as the persistently weak recovery of global industrial production and international trade since 2008. Notwithstanding this, they have emerged relatively unscathed from the global financial crisis of 2008–2009. Growth faltered only briefly in Kazakhstan, Tajikistan and Turkmenistan and not at all in Uzbekistan, where the economy grew by more than 7% per year on average between 2008 and 2013. Turkmenistan achieved unusually high 14.7% growth in 2011. Kyrgyzstan's performance has been more erratic but this phenomenon was visible well before 2008.
The republics which have fared best benefitted from the commodities boom during the first decade of the 2000s. Kazakhstan and Turkmenistan have abundant oil and natural gas reserves and Uzbekistan's own reserves make it more or less self-sufficient. Kyrgyzstan, Tajikistan and Uzbekistan all have gold reserves and Kazakhstan has the world's largest uranium reserves. Fluctuating global demand for cotton, aluminium and other metals (except gold) in recent years has hit Tajikistan hardest, since aluminium and raw cotton are its chief exports − the Tajik Aluminium Company is the country's primary industrial asset. In January 2014, the Minister of Agriculture announced the government's intention to reduce the acreage of land cultivated by cotton to make way for other crops. Uzbekistan and Turkmenistan are major cotton exporters themselves, ranking fifth and ninth respectively worldwide for volume in 2014.
Although both exports and imports have grown significantly over the past decade, the Central Asian republics remain vulnerable to economic shocks, owing to their reliance on exports of raw materials, a restricted circle of trading partners and a negligible manufacturing capacity. Kyrgyzstan has the added disadvantage of being considered resource poor, although it does have ample water. Most of its electricity is generated by hydropower.
The Kyrgyz economy was shaken by a series of shocks between 2010 and 2012. In April 2010, President Kurmanbek Bakiyev was deposed by a popular uprising, with former minister of foreign affairs Roza Otunbayeva assuring the interim presidency until the election of Almazbek Atambayev in November 2011. Food prices rose two years in a row and, in 2012, production at the major Kumtor gold mine fell by 60% after the site was perturbed by geological movements. According to the World Bank, 33.7% of the population was living in absolute poverty in 2010 and 36.8% a year later.
Despite high rates of economic growth in recent years, GDP per capita in Central Asia was higher than the average for developing countries only in Kazakhstan (PPP$23,206 in 2013) and Turkmenistan (PPP$14,201). The figure was just PPP$5,167 for Uzbekistan, home to 45% of the region's population, and was even lower for Kyrgyzstan and Tajikistan.
Kazakhstan leads the Central Asian region in terms of foreign direct investments. The Kazakh economy accounts for more than 70% of all the investment attracted in Central Asia.
In terms of the economic influence of big powers, China is viewed as one of the key economic players in Central Asia, especially after Beijing launched its grand development strategy known as the Belt and Road Initiative (BRI) in 2013.
The Central Asian countries attracted $378.2 billion of foreign direct investment (FDI) between 2007 and 2019. Kazakhstan accounted for 77.7% of the total FDI directed to the region. Kazakhstan is also the largest country in Central Asia, accounting for more than 60 percent of the region's gross domestic product (GDP).
Central Asian nations fared comparatively well economically throughout the COVID-19 pandemic. Many variables are likely to have been at play, but disparities in economic structure, the intensity of the pandemic, and accompanying containment efforts may each explain part of the variation in nations' experiences. Central Asian countries are, however, predicted to be hit the hardest in the future. Only 4% of permanently closed businesses anticipate reopening in the future, with huge differences across sectors, ranging from 3% in lodging and food services to 27% in retail commerce.
In 2022, experts assessed that global climate change is likely to pose multiple economic risks to Central Asia and may result in many billions of dollars in losses unless proper adaptation measures are developed to counter rising temperatures across the region.
Bolstered by strong economic growth in all but Kyrgyzstan, national development strategies are fostering new high-tech industries, pooling resources and orienting the economy towards export markets. Many national research institutions established during the Soviet era have since become obsolete with the development of new technologies and changing national priorities. This has led countries to reduce the number of national research institutions since 2009 by grouping existing institutions to create research hubs. Several of the Turkmen Academy of Science's institutes were merged in 2014: the Institute of Botany was merged with the Institute of Medicinal Plants to become the Institute of Biology and Medicinal Plants; the Sun Institute was merged with the Institute of Physics and Mathematics to become the Institute of Solar Energy; and the Institute of Seismology merged with the State Service for Seismology to become the Institute of Seismology and Atmospheric Physics. In Uzbekistan, more than 10 institutions of the Academy of Sciences have been reorganised, following the issuance of a decree by the Cabinet of Ministers in February 2012. The aim is to orient academic research towards problem-solving and ensure continuity between basic and applied research. For example, the Mathematics and Information Technology Research Institute has been subsumed under the National University of Uzbekistan and the Institute for Comprehensive Research on Regional Problems of Samarkand has been transformed into a problem-solving laboratory on environmental issues within Samarkand State University. Other research institutions have remained attached to the Uzbek Academy of Sciences, such as the Centre of Genomics and Bioinformatics.
Kazakhstan and Turkmenistan are also building technology parks as part of their drive to modernise infrastructure. In 2011, construction of a technopark began in the village of Bikrova near Ashgabat, the Turkmen capital. It will combine research, education, industrial facilities, business incubators and exhibition centres. The technopark will house research on alternative energy sources (sun, wind) and the assimilation of nanotechnologies. Between 2010 and 2012, technological parks were set up in the east, south and north Kazakhstan oblasts (administrative units) and in the capital, Astana. A Centre for Metallurgy was also established in the east Kazakhstan oblast, as well as a Centre for Oil and Gas Technologies, which will be part of the planned Caspian Energy Hub. In addition, the Centre for Technology Commercialisation has been set up in Kazakhstan as part of the Parasat National Scientific and Technological Holding, a joint stock company established in 2008 that is 100% state-owned. The centre supports research projects in technology marketing, intellectual property protection, technology licensing contracts and start-ups. The centre plans to conduct a technology audit in Kazakhstan and to review the legal framework regulating the commercialisation of research results and technology.
Countries are seeking to augment the efficiency of traditional extractive sectors but also to make greater use of information and communication technologies and other modern technologies, such as solar energy, to develop the business sector, education and research. In March 2013, two research institutes were created by presidential decree to foster the development of alternative energy sources in Uzbekistan, with funding from the Asian Development Bank and other institutions: the SPU Physical-Technical Institute (Physics Sun Institute) and the International Solar Energy Institute. Three universities have been set up since 2011 to foster competence in strategic economic areas: Nazarbayev University in Kazakhstan (first intake in 2011), an international research university; Inha University in Uzbekistan (first intake in 2014), specializing in information and communication technologies; and the International Oil and Gas University in Turkmenistan (founded in 2013). Kazakhstan and Uzbekistan are both generalizing the teaching of foreign languages at school, in order to facilitate international ties. Kazakhstan and Uzbekistan have both adopted the three-tier bachelor's, master's and PhD degree system, in 2007 and 2012 respectively, which is gradually replacing the Soviet system of Candidates and Doctors of Science. In 2010, Kazakhstan became the only Central Asian member of the Bologna Process, which seeks to harmonise higher education systems in order to create a European Higher Education Area.
The Central Asian republics' ambition of developing the business sector, education and research is being hampered by chronic low investment in research and development. Over the decade to 2013, the region's investment in research and development hovered around 0.2–0.3% of GDP. Uzbekistan broke with this trend in 2013 by raising its own research intensity to 0.41% of GDP.
Kazakhstan is the only country where the business enterprise and private non-profit sectors make any significant contribution to research and development – but research intensity overall is low in Kazakhstan: just 0.18% of GDP in 2013. Moreover, few industrial enterprises conduct research in Kazakhstan. Only one in eight (12.5%) of the country's manufacturing firms were active in innovation in 2012, according to a survey by the UNESCO Institute for Statistics. Enterprises prefer to purchase technological solutions that are already embodied in imported machinery and equipment. Just 4% of firms purchase the licenses and patents that come with this technology. Nevertheless, there appears to be a growing demand for the products of research, since enterprises spent 4.5 times more on scientific and technological services in 2008 than in 1997.
Kazakhstan and Uzbekistan have the highest researcher density in Central Asia. The number of researchers per million population is close to the world average (1,083 in 2013) in Kazakhstan (1,046) and higher than the world average in Uzbekistan (1,097).
Uzbekistan is in a particularly vulnerable position, with its heavy reliance on higher education: three-quarters of researchers were employed by the university sector in 2013 and just 6% in the business enterprise sector. With most Uzbek university researchers nearing retirement, this imbalance imperils Uzbekistan's research future. Almost all holders of a Candidate of Science, Doctor of Science or PhD are more than 40 years old and half are aged over 60; more than one in three researchers (38.4%) holds a PhD degree, or its equivalent, the remainder holding a bachelor's or master's degree.
Kazakhstan, Kyrgyzstan and Uzbekistan have all maintained a share of women researchers above 40% since the fall of the Soviet Union. Kazakhstan has even achieved gender parity, with Kazakh women dominating medical and health research and representing some 45–55% of engineering and technology researchers in 2013. In Tajikistan, however, only one in three scientists (34%) was a woman in 2013, down from 40% in 2002. Although policies are in place to give Tajik women equal rights and opportunities, these are underfunded and poorly understood. Turkmenistan has offered a state guarantee of equality for women since a law adopted in 2007, but the country does not make data available on higher education, research expenditure or researchers, making it impossible to draw any conclusions as to the law's impact on research.
Table: PhDs obtained in science and engineering in Central Asia, 2013 or closest year
Source: UNESCO Science Report: towards 2030 (2015), Table 14.1
Note: PhD graduates in science cover life sciences, physical sciences, mathematics and statistics, and computing; PhDs in engineering also cover manufacturing and construction. For Central Asia, the generic term of PhD also encompasses Candidate of Science and Doctor of Science degrees. Data are unavailable for Turkmenistan.
Table: Central Asian researchers by field of science and gender, 2013 or closest year
Source: UNESCO Science Report: towards 2030 (2015), Table 14.1
The number of scientific papers published in Central Asia grew by almost 50% between 2005 and 2014, driven by Kazakhstan, which overtook Uzbekistan over this period to become the region's most prolific scientific publisher, according to Thomson Reuters' Web of Science (Science Citation Index Expanded). Between 2005 and 2014, Kazakhstan's share of scientific papers from the region grew from 35% to 56%. Although two-thirds of papers from the region have a foreign co-author, the main partners tend to come from beyond Central Asia, namely the Russian Federation, the USA, Germany, the United Kingdom and Japan.
Five Kazakh patents were registered at the US Patent and Trademark Office between 2008 and 2013, compared to three for Uzbek inventors and none at all for the other three Central Asian republics, Kyrgyzstan, Tajikistan and Turkmenistan.
Kazakhstan is Central Asia's main trader in high-tech products. Kazakh imports nearly doubled between 2008 and 2013, from US$2.7 billion to US$5.1 billion. There has been a surge in imports of computers, electronics and telecommunications; these products represented an investment of US$744 million in 2008 and US$2.6 billion five years later. The growth in exports was more gradual – from US$2.3 billion to US$3.1 billion – and dominated by chemical products (other than pharmaceuticals), which represented two-thirds of exports in 2008 (US$1.5 billion) and 83% (US$2.6 billion) in 2013.
The five Central Asian republics belong to several international bodies, including the Organization for Security and Co-operation in Europe, the Economic Cooperation Organization and the Shanghai Cooperation Organisation. They are also members of the Central Asia Regional Economic Cooperation (CAREC) Programme, which also includes Afghanistan, Azerbaijan, China, Mongolia and Pakistan. In November 2011, the 10 member countries adopted the CAREC 2020 Strategy, a blueprint for furthering regional co-operation. Over the decade to 2020, US$50 billion is being invested in priority projects in transport, trade and energy to improve members' competitiveness. The landlocked Central Asian republics are conscious of the need to co-operate in order to maintain and develop their transport networks and energy, communication and irrigation systems. Only Kazakhstan, Azerbaijan, and Turkmenistan border the Caspian Sea and none of the republics has direct access to an ocean, complicating the transportation of hydrocarbons, in particular, to world markets.
Kazakhstan is also one of the three founding members of the Eurasian Economic Union in 2014, along with Belarus and the Russian Federation. Armenia and Kyrgyzstan have since joined this body. As co-operation among the member states in science and technology is already considerable and well-codified in legal texts, the Eurasian Economic Union is expected to have a limited additional impact on co-operation among public laboratories or academia but it should encourage business ties and scientific mobility, since it includes provision for the free circulation of labour and unified patent regulations.
Kazakhstan and Tajikistan participated in the Innovative Biotechnologies Programme (2011–2015) launched by the Eurasian Economic Community, the predecessor of the Eurasian Economic Union. The programme also involved Belarus and the Russian Federation. Within this programme, prizes were awarded at an annual bio-industry exhibition and conference. In 2012, 86 Russian organisations participated, plus three from Belarus, one from Kazakhstan and three from Tajikistan, as well as two scientific research groups from Germany. At the time, Vladimir Debabov, scientific director of the Genetika State Research Institute for Genetics and the Selection of Industrial Micro-organisms in the Russian Federation, stressed the paramount importance of developing bio-industry. "In the world today, there is a strong tendency to switch from petrochemicals to renewable biological sources", he said. "Biotechnology is developing two to three times faster than chemicals."
Kazakhstan also participated in a second project of the Eurasian Economic Community, the establishment of the Centre for Innovative Technologies on 4 April 2013, with the signing of an agreement between the Russian Venture Company (a government fund of funds), the Kazakh JSC National Agency and the Belarusian Innovative Foundation. Each of the selected projects is entitled to funding of US$3–90 million and is implemented within a public–private partnership. The first few approved projects focused on supercomputers, space technologies, medicine, petroleum recycling, nanotechnologies and the ecological use of natural resources. Once these initial projects have spawned viable commercial products, the venture company plans to reinvest the profits in new projects. This venture company is not a purely economic structure; it has also been designed to promote a common economic space among the three participating countries. Kazakhstan recognises the role that civil society initiatives play in addressing the consequences of the COVID-19 crisis.
Four of the five Central Asian republics have also been involved in a project launched by the European Union in September 2013, IncoNet CA. The aim of this project is to encourage Central Asian countries to participate in research projects within Horizon 2020, the European Union's eighth research and innovation funding programme. The focus of these research projects is on three societal challenges considered as being of mutual interest to both the European Union and Central Asia, namely: climate change, energy and health. IncoNet CA builds on the experience of earlier projects which involved other regions, such as Eastern Europe, the South Caucasus and the Western Balkans. IncoNet CA focuses on twinning research facilities in Central Asia and Europe. It involves a consortium of partner institutions from Austria, the Czech Republic, Estonia, Germany, Hungary, Kazakhstan, Kyrgyzstan, Poland, Portugal, Tajikistan, Turkey and Uzbekistan. In May 2014, the European Union launched a 24-month call for project applications from twinned institutions – universities, companies and research institutes – for funding of up to €10,000 to enable them to visit one another's facilities to discuss project ideas or prepare joint events like workshops.
The International Science and Technology Center (ISTC) was established in 1992 by the European Union, Japan, the Russian Federation and the US to engage weapons scientists in civilian research projects and to foster technology transfer. ISTC branches have been set up in the following countries party to the agreement: Armenia, Belarus, Georgia, Kazakhstan, Kyrgyzstan and Tajikistan. The headquarters of ISTC were moved to Nazarbayev University in Kazakhstan in June 2014, three years after the Russian Federation announced its withdrawal from the centre.
Kyrgyzstan, Tajikistan and Kazakhstan have been members of the World Trade Organization since 1998, 2013 and 2015 respectively.
By a broad definition including Mongolia and Afghanistan, more than 90 million people live in Central Asia, about 2% of Asia's total population. Of the regions of Asia, only North Asia has fewer people. It has a population density of 9 people per km², vastly less than the 80.5 people per km² of the continent as a whole. Kazakhstan is one of the least densely populated countries in the world.
Russian, as well as being spoken by around six million ethnic Russians and Ukrainians of Central Asia, is the de facto lingua franca throughout the former Soviet Central Asian Republics. Mandarin Chinese has an equally dominant presence in Inner Mongolia, Qinghai and Xinjiang.
The languages of the majority of the inhabitants of the former Soviet Central Asian Republics belong to the Turkic language group. Turkmen is mainly spoken in Turkmenistan, and as a minority language in Afghanistan, Russia, Iran and Turkey. Kazakh and Kyrgyz are related languages of the Kypchak group of Turkic languages and are spoken throughout Kazakhstan, Kyrgyzstan, and as a minority language in Tajikistan, Afghanistan and Xinjiang. Uzbek and Uyghur are spoken in Uzbekistan, Tajikistan, Kyrgyzstan, Afghanistan and Xinjiang.
Middle Iranian languages were once spoken throughout Central Asia, such as the once prominent Sogdian, Khwarezmian, Bactrian and Scythian, which are now extinct and belonged to the Eastern Iranian family. The Eastern Iranian Pashto language is still spoken in Afghanistan and northwestern Pakistan. Other minor Eastern Iranian languages such as Shughni, Munji, Ishkashimi, Sarikoli, Wakhi, Yaghnobi and Ossetic are also spoken at various places in Central Asia. Varieties of Persian are also spoken as a major language in the region, locally known as Dari (in Afghanistan), Tajik (in Tajikistan and Uzbekistan), and Bukhori (by the Bukharan Jews of Central Asia).
Tocharian, another Indo-European language group, which was once predominant in oases on the northern edge of the Tarim Basin of Xinjiang, is now extinct.
Other language groups include the Tibetic languages, spoken by around six million people across the Tibetan Plateau and into Qinghai, Sichuan (Szechwan), Ladakh and Baltistan, and the Nuristani languages of northeastern Afghanistan. Korean is spoken by the Koryo-saram minority, mainly in Kazakhstan and Uzbekistan.
Religion in Central Asia
Islam is the religion most common in the Central Asian Republics, Afghanistan, Xinjiang, and the peripheral western regions, such as Bashkortostan. Most Central Asian Muslims are Sunni, although there are sizable Shia minorities in Afghanistan and Tajikistan.
Buddhism and Zoroastrianism were the major faiths in Central Asia before the arrival of Islam. Zoroastrian influence is still felt today in such celebrations as Nowruz, held in all five of the Central Asian states. The transmission of Buddhism along the Silk Road eventually brought the religion to China. Amongst the Turkic peoples, Tengrism was the leading religion before Islam. Tibetan Buddhism is most common in Tibet, Mongolia, Ladakh, and the southern Russian regions of Siberia.
The form of Christianity most practiced in the region in previous centuries was Nestorianism, but now the largest denomination is the Russian Orthodox Church, with many members in Kazakhstan, where about 25% of the population of 19 million identify as Christian, 17% in Uzbekistan and 5% in Kyrgyzstan. Pew Research Center estimates indicate that in 2010, around 6 million Christians lived in Central Asian countries. The Pew Forum study finds that Kazakhstan (4.1 million) has the largest Christian population in the region, followed by Uzbekistan (710,000), Kyrgyzstan (660,000), Turkmenistan (320,000) and Tajikistan (100,000).
The Bukharan Jews were once a sizable community in Uzbekistan and Tajikistan, but nearly all have emigrated since the dissolution of the Soviet Union.
In Siberia, shamanistic practices persist, including forms of divination such as Kumalak.
Contact and migration with Han people from China has brought Confucianism, Daoism, Mahayana Buddhism, and other Chinese folk beliefs into the region.
Central Asia is where many integral beliefs and elements in various religious traditions of Judaism, Christianity, Islam, Buddhism, and Hinduism originated.
Central Asia has long been a strategic location because of its proximity to several great powers on the Eurasian landmass. The region itself never held a dominant stationary population, nor was it able to make full use of its natural resources. Thus, throughout history it has rarely become the seat of power for an empire or influential state. Central Asia has been divided, redivided, conquered out of existence, and fragmented time and time again; it has served more as the battleground for outside powers than as a power in its own right.
Central Asia had both the advantage and disadvantage of a central location between four historical seats of power. From its central location, it has access to trade routes to and from all the regional powers. On the other hand, it has been continuously vulnerable to attack from all sides throughout its history, resulting in political fragmentation or an outright power vacuum as it was successively dominated by outside powers.
In the post–Cold War era, Central Asia is an ethnic cauldron, prone to instability and conflicts, without a sense of national identity, but rather a mess of historical cultural influences, tribal and clan loyalties, and religious fervor. It is no longer just Russia that projects influence into the area, but also Turkey, Iran, China, Pakistan, India and the United States:
Russian historian Lev Gumilev wrote that the Xiongnu, the Mongols (Mongol Empire, Zunghar Khanate) and the Turkic peoples (First Turkic Khaganate, Uyghur Khaganate) played a role in stopping Chinese aggression to the north. The Turkic Khaganate, in particular, pursued a deliberate policy of resisting Chinese assimilation. Another theoretical analysis of the historical geopolitics of Central Asia was made through a reinterpretation of the Orkhon inscriptions.
The region, along with Russia, is also part of "the great pivot" as per the Heartland Theory of Halford Mackinder, which says that the power which controls Central Asia—richly endowed with natural resources—shall ultimately be the "empire of the world".
In the context of the United States' War on Terror, Central Asia has once again become the center of geostrategic calculations. Pakistan's status has been upgraded by the U.S. government to Major non-NATO ally because of its central role in serving as a staging point for the invasion of Afghanistan, providing intelligence on Al-Qaeda operations in the region, and leading the hunt for Osama bin Laden.
Afghanistan, which had served as a haven and source of support for Al-Qaeda under the protection of Mullah Omar and the Taliban, was the target of a U.S. invasion in 2001 and ongoing reconstruction and drug-eradication efforts. U.S. military bases have also been established in Uzbekistan and Kyrgyzstan, causing both Russia and the People's Republic of China to voice their concern over a permanent U.S. military presence in the region.
Western governments have accused Russia, China and the former Soviet republics of using the War on Terror to justify the suppression of separatist movements and of the ethnic and religious groups associated with them. | [
{
"paragraph_id": 0,
"text": "Central Asia is a subregion of Asia that stretches from the Caspian Sea in the southwest and Eastern Europe in the northwest to Western China and Mongolia in the east, and from Afghanistan and Iran in the south to Russia in the north. It includes the former Soviet republics of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. Central Asian nations are colloquially referred to as the \"-stans\" as the countries all have names ending with the Persian suffix \"-stan\", meaning \"land of\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the pre-Islamic and early Islamic eras (c. 1000 and earlier) Central Asia was inhabited predominantly by Iranian peoples, populated by Eastern Iranian-speaking Bactrians, Sogdians, Chorasmians, and the semi-nomadic Scythians and Dahae. After expansion by Turkic peoples, Central Asia also became the homeland for the Uzbeks, Kazakhs, Tatars, Turkmens, Kyrgyz, and Uyghurs; Turkic languages largely replaced the Iranian languages spoken in the area, with the exception of Tajikistan and areas where Tajik is spoken.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Central Asia was historically closely tied to the Silk Road trade routes, acting as a crossroads for the movement of people, goods, and ideas between Europe and the Far East. Most countries in Central Asia are still integral to parts of the world economy.",
"title": ""
},
{
"paragraph_id": 3,
"text": "From the mid-19th century until almost the end of the 20th century, Central Asia was colonised by the Russians, and incorporated into the Russian Empire, and later the Soviet Union, which led to Russians and other Slavs emigrating into the area. Modern-day Central Asia is home to a large population of European settlers, who mostly live in Kazakhstan; 7 million Russians, 500,000 Ukrainians, and about 170,000 Germans. Stalinist-era forced deportation policies also mean that over 300,000 Koreans live there.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Central Asia has a population of about 72 million, in five countries: Kazakhstan (19 million), Kyrgyzstan (7 million), Tajikistan (10 million), Turkmenistan (6 million), and Uzbekistan (35 million).",
"title": ""
},
{
"paragraph_id": 5,
"text": "One of the first geographers to mention Central Asia as a distinct region of the world was Alexander von Humboldt. The borders of Central Asia are subject to multiple definitions. Historically, political geography and culture have been two significant parameters widely used in scholarly definitions of Central Asia. Humboldt's definition composed of every country between 5° North and 5° South of the latitude 44.5°N. Humboldt mentions some geographic features of this region, which include the Caspian Sea in the west, the Altai mountains in the north and the Hindu Kush and Pamir mountains in the South. He did not give an eastern border for the region. His legacy is still seen: Humboldt University of Berlin, named after him, offers a course in Central Asian studies. The Russian geographer Nikolaĭ Khanykov questioned the latitudinal definition of Central Asia and preferred a physical one of all countries located in the region landlocked from water, including Afghanistan, Khorasan (Northeast Iran), Kyrgyzstan, Tajikistan, Turkmenistan, Uyghuristan (Xinjiang), and Uzbekistan.",
"title": "Definitions"
},
{
"paragraph_id": 6,
"text": "Russian culture has two distinct terms: Средняя Азия (Srednyaya Aziya or \"Middle Asia\", the narrower definition, which includes only those traditionally non-Slavic, Central Asian lands that were incorporated within those borders of historical Russia) and Центральная Азия (Tsentralnaya Aziya or \"Central Asia\", the wider definition, which includes Central Asian lands that have never been part of historical Russia). The latter definition includes Afghanistan and 'East Turkestan'.",
"title": "Definitions"
},
{
"paragraph_id": 7,
"text": "The most limited definition was the official one of the Soviet Union, which defined Middle Asia as consisting solely of Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan, omitting Kazakhstan. Soon after the dissolution of the Soviet Union in 1991, the leaders of the four former Soviet Central Asian Republics met in Tashkent and declared that the definition of Central Asia should include Kazakhstan as well as the original four included by the Soviets. Since then, this has become the most common definition of Central Asia.",
"title": "Definitions"
},
{
"paragraph_id": 8,
"text": "In 1978, UNESCO defined the region as \"Afghanistan, north-eastern Iran, Pakistan, northern India, western China, Mongolia and the Soviet Central Asian Republics\".",
"title": "Definitions"
},
{
"paragraph_id": 9,
"text": "An alternative method is to define the region based on ethnicity, and in particular, areas populated by Eastern Turkic, Eastern Iranian, or Mongolian peoples. These areas include Xinjiang Uyghur Autonomous Region, the Turkic regions of southern Siberia, the five republics, and Afghan Turkestan. Afghanistan as a whole, the northern and western areas of Pakistan and the Kashmir Valley of India may also be included. The Tibetans and Ladakhis are also included. Most of the mentioned peoples are considered the \"indigenous\" peoples of the vast region. Central Asia is sometimes referred to as Turkestan.",
"title": "Definitions"
},
{
"paragraph_id": 10,
"text": "Central Asia is a region of varied geography, including high passes and mountains (Tian Shan), vast deserts (Kyzyl Kum, Taklamakan), and especially treeless, grassy steppes. The vast steppe areas of Central Asia are considered together with the steppes of Eastern Europe as a homogeneous geographical zone known as the Eurasian Steppe.",
"title": "Geography"
},
{
"paragraph_id": 11,
"text": "Much of the land of Central Asia is too dry or too rugged for farming. The Gobi desert extends from the foot of the Pamirs, 77° E, to the Great Khingan (Da Hinggan) Mountains, 116°–118° E.",
"title": "Geography"
},
{
"paragraph_id": 12,
"text": "Central Asia has the following geographic extremes:",
"title": "Geography"
},
{
"paragraph_id": 13,
"text": "A majority of the people earn a living by herding livestock. Industrial activity centers in the region's cities.",
"title": "Geography"
},
{
"paragraph_id": 14,
"text": "Major rivers of the region include the Amu Darya, the Syr Darya, Irtysh, the Hari River and the Murghab River. Major bodies of water include the Aral Sea and Lake Balkhash, both of which are part of the huge west-central Asian endorheic basin that also includes the Caspian Sea.",
"title": "Geography"
},
{
"paragraph_id": 15,
"text": "Both of these bodies of water have shrunk significantly in recent decades due to the diversion of water from rivers that feed them for irrigation and industrial purposes. Water is an extremely valuable resource in arid Central Asia and can lead to rather significant international disputes.",
"title": "Geography"
},
{
"paragraph_id": 16,
"text": "Central Asia is bounded on the north by the forests of Siberia. The northern half of Central Asia (Kazakhstan) is the middle part of the Eurasian steppe. Westward the Kazakh steppe merges into the Russian-Ukrainian steppe and eastward into the steppes and deserts of Dzungaria and Mongolia. Southward the land becomes increasingly dry and the nomadic population increasingly thin. The south supports areas of dense population and cities wherever irrigation is possible. The main irrigated areas are along the eastern mountains, along the Oxus and Jaxartes Rivers and along the north flank of the Kopet Dagh near the Persian border. East of the Kopet Dagh is the important oasis of Merv and then a few places in Afghanistan like Herat and Balkh. Two projections of the Tian Shan create three \"bays\" along the eastern mountains. The largest, in the north, is eastern Kazakhstan, traditionally called Jetysu or Semirechye which contains Lake Balkhash. In the center is the small but densely-populated Ferghana valley. In the south is Bactria, later called Tocharistan, which is bounded on the south by the Hindu Kush mountains of Afghanistan. The Syr Darya (Jaxartes) rises in the Ferghana valley and the Amu Darya (Oxus) rises in Bactria. Both flow northwest into the Aral Sea. Where the Oxus meets the Aral Sea it forms a large delta called Khwarazm and later the Khanate of Khiva. North of the Oxus is the less-famous but equally important Zarafshan River which waters the great trading cities of Bokhara and Samarkand. The other great commercial city was Tashkent northwest of the mouth of the Ferghana valley. The land immediately north of the Oxus was called Transoxiana and also Sogdia, especially when referring to the Sogdian merchants who dominated the silk road trade.",
"title": "Historical regions"
},
{
"paragraph_id": 17,
"text": "To the east, Dzungaria and the Tarim Basin were united into the Manchu-Chinese province of Xinjiang (Sinkiang; Hsin-kiang) about 1759. Caravans from China usually went along the north or south side of the Tarim basin and joined at Kashgar before crossing the mountains northwest to Ferghana or southwest to Bactria. A minor branch of the silk road went north of the Tian Shan through Dzungaria and Zhetysu before turning southwest near Tashkent. Nomadic migrations usually moved from Mongolia through Dzungaria before turning southwest to conquer the settled lands or continuing west toward Europe.",
"title": "Historical regions"
},
{
"paragraph_id": 18,
"text": "The Kyzyl Kum Desert or semi-desert is between the Oxus and Jaxartes, and the Karakum Desert is between the Oxus and Kopet Dagh in Turkmenistan. Khorasan meant approximately northeast Persia and northern Afghanistan. Margiana was the region around Merv. The Ustyurt Plateau is between the Aral and Caspian Seas.",
"title": "Historical regions"
},
{
"paragraph_id": 19,
"text": "To the southwest, across the Kopet Dagh, lies Persia. From here Persian and Islamic civilisation penetrated Central Asia and dominated its high culture until the Russian conquest. In the southeast is the route to India. In early times Buddhism spread north and throughout much of history warrior kings and tribes would move southeast to establish their rule in northern India. Most nomadic conquerors entered from the northeast. After 1800 western civilisation in its Russian and Soviet form penetrated from the northwest.",
"title": "Historical regions"
},
{
"paragraph_id": 20,
"text": "Because Central Asia is landlocked and not buffered by a large body of water, temperature fluctuations are often severe, excluding the hot, sunny summer months. In most areas the climate is dry and continental, with hot summers and cool to cold winters, with occasional snowfall. Outside high-elevation areas, the climate is mostly semi-arid to arid. In lower elevations, summers are hot with blazing sunshine. Winters feature occasional rain or snow from low-pressure systems that cross the area from the Mediterranean Sea. Average monthly precipitation is very low from July to September, rises in autumn (October and November) and is highest in March or April, followed by swift drying in May and June. Winds can be strong, producing dust storms sometimes, especially toward the end of the summer in September and October. Specific cities that exemplify Central Asian climate patterns include Tashkent and Samarkand, Uzbekistan, Ashgabat, Turkmenistan, and Dushanbe, Tajikistan. The last of these represents one of the wettest climates in Central Asia, with an average annual precipitation of over 560 mm (22 inches).",
"title": "Climate"
},
{
"paragraph_id": 21,
"text": "Biogeographically, Central Asia is part of the Palearctic realm. The largest biome in Central Asia is the temperate grasslands, savannas, and shrublands biome. Central Asia also contains the montane grasslands and shrublands, deserts and xeric shrublands and temperate coniferous forests biomes.",
"title": "Climate"
},
{
"paragraph_id": 22,
"text": "As of 2022, Central Asia is one of the most vulnerable regions to global climate change in the world and the region's temperature is growing faster than the global average.",
"title": "Climate"
},
{
"paragraph_id": 23,
"text": "Although, during the golden age of Orientalism the place of Central Asia in the world history was marginalised, contemporary historiography has rediscovered the \"centrality\" of the Central Asia. The history of Central Asia is defined by the area's climate and geography. The aridness of the region made agriculture difficult, and its distance from the sea cut it off from much trade. Thus, few major cities developed in the region; instead, the area was for millennia dominated by the nomadic horse peoples of the steppe.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Relations between the steppe nomads and the settled people in and around Central Asia were long marked by conflict. The nomadic lifestyle was well suited to warfare, and the steppe horse riders became some of the most militarily potent people in the world, limited only by their lack of internal unity. Any internal unity that was achieved was most probably due to the influence of the Silk Road, which traveled along Central Asia. Periodically, great leaders or changing conditions would organise several tribes into one force and create an almost unstoppable power. These included the Hun invasion of Europe, the Five Barbarians rebellions in China and most notably the Mongol conquest of much of Eurasia.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "During pre-Islamic and early Islamic times, Central Asia was inhabited predominantly by speakers of Iranian languages. Among the ancient sedentary Iranian peoples, the Sogdians and Chorasmians played an important role, while Iranian peoples such as Scythians and the later on Alans lived a nomadic or semi-nomadic lifestyle.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "The main migration of Turkic peoples occurred between the 6th and 11th centuries, when they spread across most of Central Asia. The Eurasian Steppe slowly transitioned from Indo European and Iranian-speaking groups with dominant West-Eurasian ancestry to a more heterogeneous region with increasing East Asian ancestry through Turkic and Mongolian groups in the past thousands years, including extensive Turkic and later Mongol migrations out of Mongolia and slow assimilation of local populations. In the 8th century AD, the Islamic expansion reached the region but had no significant demographic impact. In the 13th century AD, the Mongolian invasion of Central Asia brought most of the region under Mongolian influence, which had \"enormous demographic success\", but did not impact the cultural or linguistic landscape.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Once populated by Iranian tribes and other Indo-European speaking people, Central Asia experienced numerous invasions emanating out of Southern Siberia and Mongolia that would drastically affect the region. Genetic data shows that the different Central Asian Turkic-speaking peoples have between ~22% and ~70% East Asian ancestry (represented by \"Baikal hunter-gatherer ancestry\" shared with other Northeast Asians and Eastern Siberians), in contrast to Iranian-speaking Central Asians, specifically Tajiks, which display genetic continuity to Indo-Iranians of the Iron Age. Certain Turkic ethnic groups, specifically the Kazakhs, display even higher East Asian ancestry. This is explained by substantial Mongolian influence on the Kazakh genome, through significant admixture between blue eyes, blonde hair, the medieval Kipchaks of Central Asia and the invading medieval Mongolians. The data suggests that the Mongol invasion of Central Asia had lasting impacts onto the genetic makeup of Kazakhs. According to recent genetic genealogy testing, the genetic admixture of the Uzbeks clusters somewhere between the Iranian peoples and the Mongols. Another study shows that the Uzbeks are closely related to other Turkic peoples of Central Asia and rather distant from Iranian people. The study also analysed the maternal and paternal DNA haplogroups and shows that Turkic speaking groups are more homogenous than Iranian speaking groups. Genetic studies analyzing the full genome of Uzbeks and other Central Asian populations found that about ~27-60% of the Uzbek ancestry is derived from East Asian sources, with the remainder ancestry (~40–73%) being made up by European and Middle Eastern components. According to a recent study, the Kyrgyz, Kazakhs, Uzbeks, and Turkmens share more of their gene pool with various East Asian and Siberian populations than with West Asian or European populations, though the Turkmens have a large percentage from populations to the east, their main components are Central Asian. The study further suggests that both migration and linguistic assimilation helped to spread the Turkic languages in Eurasia.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The Tang dynasty of China expanded westwards and controlled large parts of Central Asia, directly and indirectly through their Turkic vassals. Tang China actively supported the Turkification of Central Asia, while extending its cultural influence. The Tang Chinese were defeated by the Abbasid Caliphate at the Battle of Talas in 751, marking the end of the Tang dynasty's western expansion and the 150 years of Chinese influence. The Tibetan Empire would take the chance to rule portions of Central Asia and South Asia. During the 13th and 14th centuries, the Mongols conquered and ruled the largest contiguous empire in recorded history. Most of Central Asia fell under the control of the Chagatai Khanate.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "The dominance of the nomads ended in the 16th century, as firearms allowed settled peoples to gain control of the region. Russia, China, and other powers expanded into the region and had captured the bulk of Central Asia by the end of the 19th century. After the Russian Revolution, the western Central Asian regions were incorporated into the Soviet Union. The eastern part of Central Asia, known as Xinjiang, was incorporated into the People's Republic of China, having been previously ruled by the Qing dynasty and the Republic of China. Mongolia gained its independence from China and has remained independent but became a Soviet satellite state until the dissolution of the Soviet Union. Afghanistan remained relatively independent of major influence by the Soviet Union until the Saur Revolution of 1978.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "The Soviet areas of Central Asia saw much industrialisation and construction of infrastructure, but also the suppression of local cultures, hundreds of thousands of deaths from failed collectivisation programmes, and a lasting legacy of ethnic tensions and environmental problems. Soviet authorities deported millions of people, including entire nationalities, from western areas of the Soviet Union to Central Asia and Siberia. According to Touraj Atabaki and Sanjyot Mehendale, \"From 1959 to 1970, about two million people from various parts of the Soviet Union migrated to Central Asia, of which about one million moved to Kazakhstan.\"",
"title": "History"
},
{
"paragraph_id": 31,
"text": "With the collapse of the Soviet Union, five countries gained independence, that is, Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. The historian and Turkologist Peter B. Golden explains that without the imperial manipulations of the Russian Empire but above all the Soviet Union, the creation of said republics would have been impossible.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In nearly all the new states, former Communist Party officials retained power as local strongmen. None of the new republics could be considered functional democracies in the early days of independence, although in recent years Kyrgyzstan, Kazakhstan and Mongolia have made further progress towards more open societies, unlike Uzbekistan, Tajikistan, and Turkmenistan, which have maintained many Soviet-style repressive tactics.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "At the crossroads of Asia, shamanistic practices live alongside Buddhism. Thus, Yama, Lord of Death, was revered in Tibet as a spiritual guardian and judge. Mongolian Buddhism, in particular, was influenced by Tibetan Buddhism. The Qianlong Emperor of Qing China in the 18th century was Tibetan Buddhist and would sometimes travel from Beijing to other cities for personal religious worship.",
"title": "Culture"
},
{
"paragraph_id": 34,
"text": "Central Asia also has an indigenous form of improvisational oral poetry that is over 1000 years old. It is principally practiced in Kyrgyzstan and Kazakhstan by akyns, lyrical improvisationalists. They engage in lyrical battles, the aytysh or the alym sabak. The tradition arose out of early bardic oral historians. They are usually accompanied by a stringed instrument—in Kyrgyzstan, a three-stringed komuz, and in Kazakhstan, a similar two-stringed instrument, the dombra.",
"title": "Culture"
},
{
"paragraph_id": 35,
"text": "Photography in Central Asia began to develop after 1882, when a Russian Mennonite photographer named Wilhelm Penner moved to the Khanate of Khiva during the Mennonite migration to Central Asia led by Claas Epp, Jr. Upon his arrival to Khanate of Khiva, Penner shared his photography skills with a local student Khudaybergen Divanov, who later became the founder of Uzbek photography.",
"title": "Culture"
},
{
"paragraph_id": 36,
"text": "Some also learn to sing the Manas, Kyrgyzstan's epic poem (those who learn the Manas exclusively but do not improvise are called manaschis). During Soviet rule, akyn performance was co-opted by the authorities and subsequently declined in popularity. With the fall of the Soviet Union, it has enjoyed a resurgence, although akyns still do use their art to campaign for political candidates. A 2005 The Washington Post article proposed a similarity between the improvisational art of akyns and modern freestyle rap performed in the West.",
"title": "Culture"
},
{
"paragraph_id": 37,
"text": "As a consequence of Russian colonisation, European fine arts – painting, sculpture and graphics – have developed in Central Asia. The first years of the Soviet regime saw the appearance of modernism, which took inspiration from the Russian avant-garde movement. Until the 1980s, Central Asian arts had developed along with general tendencies of Soviet arts. In the 90s, arts of the region underwent some significant changes. Institutionally speaking, some fields of arts were regulated by the birth of the art market, some stayed as representatives of official views, while many were sponsored by international organisations. The years of 1990–2000 were times for the establishment of contemporary arts. In the region, many important international exhibitions are taking place, Central Asian art is represented in European and American museums, and the Central Asian Pavilion at the Venice Biennale has been organised since 2005.",
"title": "Culture"
},
{
"paragraph_id": 38,
"text": "Equestrian sports are traditional in Central Asia, with disciplines like endurance riding, buzkashi, dzhigit and kyz kuu.",
"title": "Culture"
},
{
"paragraph_id": 39,
"text": "The traditional game of Buzkashi is played throughout the Central Asian region, the countries sometimes organise Buzkashi competition amongst each other. The First regional competition among the Central Asian countries, Russia, Chinese Xinjiang and Turkey was held in 2013. The first world title competition was played in 2017 and won by Kazakhstan.",
"title": "Culture"
},
{
"paragraph_id": 40,
"text": "Association football is popular across Central Asia. Most countries are members of the Central Asian Football Association, a region of the Asian Football Confederation. However, Kazakhstan is a member of the UEFA.",
"title": "Culture"
},
{
"paragraph_id": 41,
"text": "Wrestling is popular across Central Asia, with Kazakhstan having claimed 14 Olympic medals, Uzbekistan seven, and Kyrgyzstan three. As former Soviet states, Central Asian countries have been successful in gymnastics.",
"title": "Culture"
},
{
"paragraph_id": 42,
"text": "Mixed Martial Arts is one of more common sports in Central Asia, Kyrgyz athlete Valentina Shevchenko holding the UFC Flyweight Champion title.",
"title": "Culture"
},
{
"paragraph_id": 43,
"text": "Cricket is the most popular sport in Afghanistan. The Afghanistan national cricket team, first formed in 2001, has claimed wins over Bangladesh, West Indies and Zimbabwe.",
"title": "Culture"
},
{
"paragraph_id": 44,
"text": "Notable Kazakh competitors include cyclists Alexander Vinokourov and Andrey Kashechkin, boxer Vassiliy Jirov and Gennady Golovkin, runner Olga Shishigina, decathlete Dmitriy Karpov, gymnast Aliya Yussupova, judoka Askhat Zhitkeyev and Maxim Rakov, skier Vladimir Smirnov, weightlifter Ilya Ilyin, and figure skaters Denis Ten and Elizabet Tursynbaeva.",
"title": "Culture"
},
{
"paragraph_id": 45,
"text": "Notable Uzbekistani competitors include cyclist Djamolidine Abdoujaparov, boxer Ruslan Chagaev, canoer Michael Kolganov, gymnast Oksana Chusovitina, tennis player Denis Istomin, chess player Rustam Kasimdzhanov, and figure skater Misha Ge.",
"title": "Culture"
},
{
"paragraph_id": 46,
"text": "Since gaining independence in the early 1990s, the Central Asian republics have gradually been moving from a state-controlled economy to a market economy. However, reform has been deliberately gradual and selective, as governments strive to limit the social cost and ameliorate living standards. All five countries are implementing structural reforms to improve competitiveness. Kazakhstan is the only CIS country to be included in the 2020 and 2019 IWB World Competitiveness rankings. In particular, they have been modernizing the industrial sector and fostering the development of service industries through business-friendly fiscal policies and other measures, to reduce the share of agriculture in GDP. Between 2005 and 2013, the share of agriculture dropped in all but Tajikistan, where it increased while industry decreased. The fastest growth in industry was observed in Turkmenistan, whereas the services sector progressed most in the other four countries.",
"title": "Economy"
},
{
"paragraph_id": 47,
"text": "Public policies pursued by Central Asian governments focus on buffering the political and economic spheres from external shocks. This includes maintaining a trade balance, minimizing public debt and accumulating national reserves. They cannot totally insulate themselves from negative exterior forces, however, such as the persistently weak recovery of global industrial production and international trade since 2008. Notwithstanding this, they have emerged relatively unscathed from the global financial crisis of 2008–2009. Growth faltered only briefly in Kazakhstan, Tajikistan and Turkmenistan and not at all in Uzbekistan, where the economy grew by more than 7% per year on average between 2008 and 2013. Turkmenistan achieved unusually high 14.7% growth in 2011. Kyrgyzstan's performance has been more erratic but this phenomenon was visible well before 2008.",
"title": "Economy"
},
{
"paragraph_id": 48,
"text": "The republics which have fared best benefitted from the commodities boom during the first decade of the 2000s. Kazakhstan and Turkmenistan have abundant oil and natural gas reserves and Uzbekistan's own reserves make it more or less self-sufficient. Kyrgyzstan, Tajikistan and Uzbekistan all have gold reserves and Kazakhstan has the world's largest uranium reserves. Fluctuating global demand for cotton, aluminium and other metals (except gold) in recent years has hit Tajikistan hardest, since aluminium and raw cotton are its chief exports − the Tajik Aluminium Company is the country's primary industrial asset. In January 2014, the Minister of Agriculture announced the government's intention to reduce the acreage of land cultivated by cotton to make way for other crops. Uzbekistan and Turkmenistan are major cotton exporters themselves, ranking fifth and ninth respectively worldwide for volume in 2014.",
"title": "Economy"
},
{
"paragraph_id": 49,
"text": "Although both exports and imports have grown significantly over the past decade, Central Asian republics countries remain vulnerable to economic shocks, owing to their reliance on exports of raw materials, a restricted circle of trading partners and a negligible manufacturing capacity. Kyrgyzstan has the added disadvantage of being considered resource poor, although it does have ample water. Most of its electricity is generated by hydropower.",
"title": "Economy"
},
{
"paragraph_id": 50,
"text": "The Kyrgyz economy was shaken by a series of shocks between 2010 and 2012. In April 2010, President Kurmanbek Bakiyev was deposed by a popular uprising, with former minister of foreign affairs Roza Otunbayeva assuring the interim presidency until the election of Almazbek Atambayev in November 2011. Food prices rose two years in a row and, in 2012, production at the major Kumtor gold mine fell by 60% after the site was perturbed by geological movements. According to the World Bank, 33.7% of the population was living in absolute poverty in 2010 and 36.8% a year later.",
"title": "Economy"
},
{
"paragraph_id": 51,
"text": "Despite high rates of economic growth in recent years, GDP per capita in Central Asia was higher than the average for developing countries only in Kazakhstan in 2013 (PPP$23,206) and Turkmenistan (PPP$14 201). It dropped to PPP$5,167 for Uzbekistan, home to 45% of the region's population, and was even lower for Kyrgyzstan and Tajikistan.",
"title": "Economy"
},
{
"paragraph_id": 52,
"text": "Kazakhstan leads the Central Asian region in terms of foreign direct investments. The Kazakh economy accounts for more than 70% of all the investment attracted in Central Asia.",
"title": "Economy"
},
{
"paragraph_id": 53,
"text": "In terms of the economic influence of big powers, China is viewed as one of the key economic players in Central Asia, especially after Beijing launched its grand development strategy known as the Belt and Road Initiative (BRI) in 2013.",
"title": "Economy"
},
{
"paragraph_id": 54,
"text": "The Central Asian countries attracted $378.2 billion of foreign direct investment (FDI) between 2007 and 2019. Kazakhstan accounted for 77.7% of the total FDI directed to the region. Kazakhstan is also the largest country in Central Asia accounting for more than 60 percent of the region's gross domestic product (GDP).",
"title": "Economy"
},
{
"paragraph_id": 55,
"text": "Central Asian nations fared better economically throughout the COVID-19 pandemic. Many variables are likely to have been at play, but disparities in economic structure, the intensity of the pandemic, and accompanying containment efforts may all be linked to part of the variety in nations' experiences. Central Asian countries are, however, predicted to be hit the worst in the future. Only 4% of permanently closed businesses anticipate to return in the future, with huge differences across sectors, ranging from 3% in lodging and food services to 27% in retail commerce.",
"title": "Economy"
},
{
"paragraph_id": 56,
"text": "In 2022, experts assessed that global climate change is likely to pose multiple economic risks to Central Asia and may possibly result in many billions of losses unless proper adaptation measures are developed to counter growing temperatures across the region.",
"title": "Economy"
},
{
"paragraph_id": 57,
"text": "Bolstered by strong economic growth in all but Kyrgyzstan, national development strategies are fostering new high-tech industries, pooling resources and orienting the economy towards export markets. Many national research institutions established during the Soviet era have since become obsolete with the development of new technologies and changing national priorities. This has led countries to reduce the number of national research institutions since 2009 by grouping existing institutions to create research hubs. Several of the Turkmen Academy of Science's institutes were merged in 2014: the Institute of Botany was merged with the Institute of Medicinal Plants to become the Institute of Biology and Medicinal Plants; the Sun Institute was merged with the Institute of Physics and Mathematics to become the Institute of Solar Energy; and the Institute of Seismology merged with the State Service for Seismology to become the Institute of Seismology and Atmospheric Physics. In Uzbekistan, more than 10 institutions of the Academy of Sciences have been reorganised, following the issuance of a decree by the Cabinet of Ministers in February 2012. The aim is to orient academic research towards problem-solving and ensure continuity between basic and applied research. For example, the Mathematics and Information Technology Research Institute has been subsumed under the National University of Uzbekistan and the Institute for Comprehensive Research on Regional Problems of Samarkand has been transformed into a problem-solving laboratory on environmental issues within Samarkand State University. Other research institutions have remained attached to the Uzbek Academy of Sciences, such as the Centre of Genomics and Bioinformatics.",
"title": "Education, science and technology"
},
{
"paragraph_id": 58,
"text": "Kazakhstan and Turkmenistan are also building technology parks as part of their drive to modernise infrastructure. In 2011, construction began of a technopark in the village of Bikrova near Ashgabat, the Turkmen capital. It will combine research, education, industrial facilities, business incubators and exhibition centres. The technopark will house research on alternative energy sources (sun, wind) and the assimilation of nanotechnologies. Between 2010 and 2012, technological parks were set up in the east, south and north Kazakhstan oblasts (administrative units) and in the capital, Astana. A Centre for Metallurgy was also established in the east Kazakhstan oblast, as well as a Centre for Oil and Gas Technologies which will be part of the planned Caspian Energy Hub. In addition, the Centre for Technology Commercialisation has been set up in Kazakhstan as part of the Parasat National Scientific and Technological Holding, a joint stock company established in 2008 that is 100% state-owned. The centre supports research projects in technology marketing, intellectual property protection, technology licensing contracts and start-ups. The centre plans to conduct a technology audit in Kazakhstan and to review the legal framework regulating the commercialisation of research results and technology.",
"title": "Education, science and technology"
},
{
"paragraph_id": 59,
"text": "Countries are seeking to augment the efficiency of traditional extractive sectors but also to make greater use of information and communication technologies and other modern technologies, such as solar energy, to develop the business sector, education and research. In March 2013, two research institutes were created by presidential decree to foster the development of alternative energy sources in Uzbekistan, with funding from the Asian Development Bank and other institutions: the SPU Physical−Technical Institute (Physics Sun Institute) and the International Solar Energy Institute. Three universities have been set up since 2011 to foster competence in strategic economic areas: Nazarbayev University in Kazakhstan (first intake in 2011), an international research university, Inha University in Uzbekistan (first intake in 2014), specializing in information and communication technologies, and the International Oil and Gas University in Turkmenistan (founded in 2013). Kazakhstan and Uzbekistan are both generalizing the teaching of foreign languages at school, in order to facilitate international ties. Kazakhstan and Uzbekistan have both adopted the three-tier bachelor's, master's and PhD degree system, in 2007 and 2012 respectively, which is gradually replacing the Soviet system of Candidates and Doctors of Science. In 2010, Kazakhstan became the only Central Asian member of the Bologna Process, which seeks to harmonise higher education systems in order to create a European Higher Education Area.",
"title": "Education, science and technology"
},
{
"paragraph_id": 60,
"text": "The Central Asian republics' ambition of developing the business sector, education and research is being hampered by chronic low investment in research and development. Over the decade to 2013, the region's investment in research and development hovered around 0.2–0.3% of GDP. Uzbekistan broke with this trend in 2013 by raising its own research intensity to 0.41% of GDP.",
"title": "Education, science and technology"
},
{
"paragraph_id": 61,
"text": "Kazakhstan is the only country where the business enterprise and private non-profit sectors make any significant contribution to research and development – but research intensity overall is low in Kazakhstan: just 0.18% of GDP in 2013. Moreover, few industrial enterprises conduct research in Kazakhstan. Only one in eight (12.5%) of the country's manufacturing firms were active in innovation in 2012, according to a survey by the UNESCO Institute for Statistics. Enterprises prefer to purchase technological solutions that are already embodied in imported machinery and equipment. Just 4% of firms purchase the license and patents that come with this technology. Nevertheless, there appears to be a growing demand for the products of research, since enterprises spent 4.5 times more on scientific and technological services in 2008 than in 1997.",
"title": "Education, science and technology"
},
{
"paragraph_id": 62,
"text": "Kazakhstan and Uzbekistan count the highest researcher density in Central Asia. The number of researchers per million population is close to the world average (1,083 in 2013) in Kazakhstan (1,046) and higher than the world average in Uzbekistan (1,097).",
"title": "Education, science and technology"
},
{
"paragraph_id": 63,
"text": "Kazakhstan is the only Central Asian country where the business enterprise and private non-profit sectors make any significant contribution to research and development. Uzbekistan is in a particularly vulnerable position, with its heavy reliance on higher education: three-quarters of researchers were employed by the university sector in 2013 and just 6% in the business enterprise sector. With most Uzbek university researchers nearing retirement, this imbalance imperils Uzbekistan's research future. Almost all holders of a Candidate of Science, Doctor of Science or PhD are more than 40 years old and half are aged over 60; more than one in three researchers (38.4%) holds a PhD degree, or its equivalent, the remainder holding a bachelor's or master's degree.",
"title": "Education, science and technology"
},
{
"paragraph_id": 64,
"text": "Kazakhstan, Kyrgyzstan and Uzbekistan have all maintained a share of women researchers above 40% since the fall of the Soviet Union. Kazakhstan has even achieved gender parity, with Kazakh women dominating medical and health research and representing some 45–55% of engineering and technology researchers in 2013. In Tajikistan, however, only one in three scientists (34%) was a woman in 2013, down from 40% in 2002. Although policies are in place to give Tajik women equal rights and opportunities, these are underfunded and poorly understood. Turkmenistan has offered a state guarantee of equality for women since a law adopted in 2007 but the lack of available data makes it impossible to draw any conclusions as to the law's impact on research. As for Turkmenistan, it does not make data available on higher education, research expenditure or researchers.",
"title": "Education, science and technology"
},
{
"paragraph_id": 65,
"text": "Table: PhDs obtained in science and engineering in Central Asia, 2013 or closest year",
"title": "Education, science and technology"
},
{
"paragraph_id": 66,
"text": "Source: UNESCO Science Report: towards 2030 (2015), Table 14.1",
"title": "Education, science and technology"
},
{
"paragraph_id": 67,
"text": "Note: PhD graduates in science cover life sciences, physical sciences, mathematics and statistics, and computing; PhDs in engineering also cover manufacturing and construction. For Central Asia, the generic term of PhD also encompasses Candidate of Science and Doctor of Science degrees. Data are unavailable for Turkmenistan.",
"title": "Education, science and technology"
},
{
"paragraph_id": 68,
"text": "Table: Central Asian researchers by field of science and gender, 2013 or closest year",
"title": "Education, science and technology"
},
{
"paragraph_id": 69,
"text": "Source: UNESCO Science Report: towards 2030 (2015), Table 14.1",
"title": "Education, science and technology"
},
{
"paragraph_id": 70,
"text": "The number of scientific papers published in Central Asia grew by almost 50% between 2005 and 2014, driven by Kazakhstan, which overtook Uzbekistan over this period to become the region's most prolific scientific publisher, according to Thomson Reuters' Web of Science (Science Citation Index Expanded). Between 2005 and 2014, Kazakhstan's share of scientific papers from the region grew from 35% to 56%. Although two-thirds of papers from the region have a foreign co-author, the main partners tend to come from beyond Central Asia, namely the Russian Federation, USA, German, United Kingdom and Japan.",
"title": "Education, science and technology"
},
{
"paragraph_id": 71,
"text": "Five Kazakh patents were registered at the US Patent and Trademark Office between 2008 and 2013, compared to three for Uzbek inventors and none at all for the other three Central Asian republics, Kyrgyzstan, Tajikistan and Turkmenistan.",
"title": "Education, science and technology"
},
{
"paragraph_id": 72,
"text": "Kazakhstan is Central Asia's main trader in high-tech products. Kazakh imports nearly doubled between 2008 and 2013, from US$2.7 billion to US$5.1 billion. There has been a surge in imports of computers, electronics and telecommunications; these products represented an investment of US$744 million in 2008 and US$2.6 billion five years later. The growth in exports was more gradual – from US$2.3 billion to US$3.1 billion – and dominated by chemical products (other than pharmaceuticals), which represented two-thirds of exports in 2008 (US$1.5 billion) and 83% (US$2.6 billion) in 2013.",
"title": "Education, science and technology"
},
{
"paragraph_id": 73,
"text": "The five Central Asian republics belong to several international bodies, including the Organization for Security and Co-operation in Europe, the Economic Cooperation Organization and the Shanghai Cooperation Organisation. They are also members of the Central Asia Regional Economic Cooperation (CAREC) Programme, which also includes Afghanistan, Azerbaijan, China, Mongolia and Pakistan. In November 2011, the 10 member countries adopted the CAREC 2020 Strategy, a blueprint for furthering regional co-operation. Over the decade to 2020, US$50 billion is being invested in priority projects in transport, trade and energy to improve members' competitiveness. The landlocked Central Asian republics are conscious of the need to co-operate in order to maintain and develop their transport networks and energy, communication and irrigation systems. Only Kazakhstan, Azerbaijan, and Turkmenistan border the Caspian Sea and none of the republics has direct access to an ocean, complicating the transportation of hydrocarbons, in particular, to world markets.",
"title": "Education, science and technology"
},
{
"paragraph_id": 74,
"text": "Kazakhstan is also one of the three founding members of the Eurasian Economic Union in 2014, along with Belarus and the Russian Federation. Armenia and Kyrgyzstan have since joined this body. As co-operation among the member states in science and technology is already considerable and well-codified in legal texts, the Eurasian Economic Union is expected to have a limited additional impact on co-operation among public laboratories or academia but it should encourage business ties and scientific mobility, since it includes provision for the free circulation of labour and unified patent regulations.",
"title": "Education, science and technology"
},
{
"paragraph_id": 75,
"text": "Kazakhstan and Tajikistan participated in the Innovative Biotechnologies Programme (2011–2015) launched by the Eurasian Economic Community, the predecessor of the Eurasian Economic Union, The programme also involved Belarus and the Russian Federation. Within this programme, prizes were awarded at an annual bio-industry exhibition and conference. In 2012, 86 Russian organisations participated, plus three from Belarus, one from Kazakhstan and three from Tajikistan, as well as two scientific research groups from Germany. At the time, Vladimir Debabov, scientific director of the Genetika State Research Institute for Genetics and the Selection of Industrial Micro-organisms in the Russian Federation, stressed the paramount importance of developing bio-industry. \"In the world today, there is a strong tendency to switch from petrochemicals to renewable biological sources\", he said. \"Biotechnology is developing two to three times faster than chemicals.\"",
"title": "Education, science and technology"
},
{
"paragraph_id": 76,
"text": "Kazakhstan also participated in a second project of the Eurasian Economic Community, the establishment of the Centre for Innovative Technologies on 4 April 2013, with the signing of an agreement between the Russian Venture Company (a government fund of funds), the Kazakh JSC National Agency and the Belarusian Innovative Foundation. Each of the selected projects is entitled to funding of US$3–90 million and is implemented within a public–private partnership. The first few approved projects focused on supercomputers, space technologies, medicine, petroleum recycling, nanotechnologies and the ecological use of natural resources. Once these initial projects have spawned viable commercial products, the venture company plans to reinvest the profits in new projects. This venture company is not a purely economic structure; it has also been designed to promote a common economic space among the three participating countries. Kazakhstan recognises the role civil society initiatives have to address the consequences of the COVID-19 crisis.",
"title": "Education, science and technology"
},
{
"paragraph_id": 77,
"text": "Four of the five Central Asian republics have also been involved in a project launched by the European Union in September 2013, IncoNet CA. The aim of this project is to encourage Central Asian countries to participate in research projects within Horizon 2020, the European Union's eighth research and innovation funding programme. The focus of this research projects is on three societal challenges considered as being of mutual interest to both the European Union and Central Asia, namely: climate change, energy and health. IncoNet CA builds on the experience of earlier projects which involved other regions, such as Eastern Europe, the South Caucasus and the Western Balkans. IncoNet CA focuses on twinning research facilities in Central Asia and Europe. It involves a consortium of partner institutions from Austria, the Czech Republic, Estonia, Germany, Hungary, Kazakhstan, Kyrgyzstan, Poland, Portugal, Tajikistan, Turkey and Uzbekistan. In May 2014, the European Union launched a 24-month call for project applications from twinned institutions – universities, companies and research institutes – for funding of up to €10, 000 to enable them to visit one another's facilities to discuss project ideas or prepare joint events like workshops.",
"title": "Education, science and technology"
},
{
"paragraph_id": 78,
"text": "The International Science and Technology Center (ISTC) was established in 1992 by the European Union, Japan, the Russian Federation and the US to engage weapons scientists in civilian research projects and to foster technology transfer. ISTC branches have been set up in the following countries party to the agreement: Armenia, Belarus, Georgia, Kazakhstan, Kyrgyzstan and Tajikistan. The headquarters of ISTC were moved to Nazarbayev University in Kazakhstan in June 2014, three years after the Russian Federation announced its withdrawal from the centre.",
"title": "Education, science and technology"
},
{
"paragraph_id": 79,
"text": "Kyrgyzstan, Tajikistan and Kazakhstan have been members of the World Trade Organization since 1998, 2013 and 2015 respectively.",
"title": "Education, science and technology"
},
{
"paragraph_id": 80,
"text": "By a broad definition including Mongolia and Afghanistan, more than 90 million people live in Central Asia, about 2% of Asia's total population. Of the regions of Asia, only North Asia has fewer people. It has a population density of 9 people per km, vastly less than the 80.5 people per km of the continent as a whole. Kazakhstan is one of the least densely populated countries in the world.",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "Russian, as well as being spoken by around six million ethnic Russians and Ukrainians of Central Asia, is the de facto lingua franca throughout the former Soviet Central Asian Republics. Mandarin Chinese has an equally dominant presence in Inner Mongolia, Qinghai and Xinjiang.",
"title": "Demographics"
},
{
"paragraph_id": 82,
"text": "The languages of the majority of the inhabitants of the former Soviet Central Asian Republics belong to the Turkic language group. Turkmen is mainly spoken in Turkmenistan, and as a minority language in Afghanistan, Russia, Iran and Turkey. Kazakh and Kyrgyz are related languages of the Kypchak group of Turkic languages and are spoken throughout Kazakhstan, Kyrgyzstan, and as a minority language in Tajikistan, Afghanistan and Xinjiang. Uzbek and Uyghur are spoken in Uzbekistan, Tajikistan, Kyrgyzstan, Afghanistan and Xinjiang.",
"title": "Demographics"
},
{
"paragraph_id": 83,
"text": "Middle Iranian languages were once spoken throughout Central Asia, such as the once prominent Sogdian, Khwarezmian, Bactrian and Scythian, which are now extinct and belonged to the Eastern Iranian family. The Eastern Iranian Pashto language is still spoken in Afghanistan and northwestern Pakistan. Other minor Eastern Iranian languages such as Shughni, Munji, Ishkashimi, Sarikoli, Wakhi, Yaghnobi and Ossetic are also spoken at various places in Central Asia. Varieties of Persian are also spoken as a major language in the region, locally known as Dari (in Afghanistan), Tajik (in Tajikistan and Uzbekistan), and Bukhori (by the Bukharan Jews of Central Asia).",
"title": "Demographics"
},
{
"paragraph_id": 84,
"text": "Tocharian, another Indo-European language group, which was once predominant in oases on the northern edge of the Tarim Basin of Xinjiang, is now extinct.",
"title": "Demographics"
},
{
"paragraph_id": 85,
"text": "Other language groups include the Tibetic languages, spoken by around six million people across the Tibetan Plateau and into Qinghai, Sichuan (Szechwan), Ladakh and Baltistan, and the Nuristani languages of northeastern Afghanistan. Korean is spoken by the Koryo-saram minority, mainly in Kazakhstan and Uzbekistan.",
"title": "Demographics"
},
{
"paragraph_id": 87,
"text": "Islam is the religion most common in the Central Asian Republics, Afghanistan, Xinjiang, and the peripheral western regions, such as Bashkortostan. Most Central Asian Muslims are Sunni, although there are sizable Shia minorities in Afghanistan and Tajikistan.",
"title": "Demographics"
},
{
"paragraph_id": 88,
"text": "Buddhism and Zoroastrianism were the major faiths in Central Asia before the arrival of Islam. Zoroastrian influence is still felt today in such celebrations as Nowruz, held in all five of the Central Asian states. The transmission of Buddhism along the Silk Road eventually brought the religion to China. Amongst the Turkic peoples, Tengrism was the leading religion before Islam. Tibetan Buddhism is most common in Tibet, Mongolia, Ladakh, and the southern Russian regions of Siberia.",
"title": "Demographics"
},
{
"paragraph_id": 89,
"text": "The form of Christianity most practiced in the region in previous centuries was Nestorianism, but now the largest denomination is the Russian Orthodox Church, with many members in Kazakhstan, where about 25% of the population of 19 million identify as Christian, 17% in Uzbekistan and 5% in Kyrgyzstan. Pew Research Center estimates indicate that in 2010, around 6 million Christians lived in Central Asian countries, the Pew Forum study finds that Kazakhstan (4.1 million) has the largest Christian population in the region, followed by Uzbekistan (710,000), Kyrgyzstan (660,000), Turkmenistan (320,000) and Tajikistan (100,000).",
"title": "Demographics"
},
{
"paragraph_id": 90,
"text": "The Bukharan Jews were once a sizable community in Uzbekistan and Tajikistan, but nearly all have emigrated since the dissolution of the Soviet Union.",
"title": "Demographics"
},
{
"paragraph_id": 91,
"text": "In Siberia, shaministic practices persist, including forms of divination such as Kumalak.",
"title": "Demographics"
},
{
"paragraph_id": 92,
"text": "Contact and migration with Han people from China has brought Confucianism, Daoism, Mahayana Buddhism, and other Chinese folk beliefs into the region.",
"title": "Demographics"
},
{
"paragraph_id": 93,
"text": "Central Asia is where many integral beliefs and elements in various religious traditions of Judaism, Christianity, Islam, Buddhism, and Hinduism originated.",
"title": "Demographics"
},
{
"paragraph_id": 94,
"text": "Central Asia has long been a strategic location merely because of its proximity to several great powers on the Eurasian landmass. The region itself never held a dominant stationary population nor was able to make use of natural resources. Thus, it has rarely throughout history become the seat of power for an empire or influential state. Central Asia has been divided, redivided, conquered out of existence, and fragmented time and time again. Central Asia has served more as the battleground for outside powers than as a power in its own right.",
"title": "Geostrategy"
},
{
"paragraph_id": 95,
"text": "Central Asia had both the advantage and disadvantage of a central location between four historical seats of power. From its central location, it has access to trade routes to and from all the regional powers. On the other hand, it has been continuously vulnerable to attack from all sides throughout its history, resulting in political fragmentation or outright power vacuum, as it is successively dominated.",
"title": "Geostrategy"
},
{
"paragraph_id": 96,
"text": "In the post–Cold War era, Central Asia is an ethnic cauldron, prone to instability and conflicts, without a sense of national identity, but rather a mess of historical cultural influences, tribal and clan loyalties, and religious fervor. Projecting influence into the area is no longer just Russia, but also Turkey, Iran, China, Pakistan, India and the United States:",
"title": "Geostrategy"
},
{
"paragraph_id": 97,
"text": "Russian historian Lev Gumilev wrote that Xiongnu, Mongols (Mongol Empire, Zunghar Khanate) and Turkic peoples (First Turkic Khaganate, Uyghur Khaganate) played a role to stop Chinese aggression to the north. The Turkic Khaganate had special policy against Chinese assimilation policy. Another interesting theoretical analysis on the historical-geopolitics of the Central Asia was made through the reinterpretation of Orkhun Inscripts.",
"title": "Geostrategy"
},
{
"paragraph_id": 98,
"text": "The region, along with Russia, is also part of \"the great pivot\" as per the Heartland Theory of Halford Mackinder, which says that the power which controls Central Asia—richly endowed with natural resources—shall ultimately be the \"empire of the world\".",
"title": "Geostrategy"
},
{
"paragraph_id": 99,
"text": "In the context of the United States' War on Terror, Central Asia has once again become the center of geostrategic calculations. Pakistan's status has been upgraded by the U.S. government to Major non-NATO ally because of its central role in serving as a staging point for the invasion of Afghanistan, providing intelligence on Al-Qaeda operations in the region, and leading the hunt on Osama bin Laden.",
"title": "Geostrategy"
},
{
"paragraph_id": 100,
"text": "Afghanistan, which had served as a haven and source of support for Al-Qaeda under the protection of Mullah Omar and the Taliban, was the target of a U.S. invasion in 2001 and ongoing reconstruction and drug-eradication efforts. U.S. military bases have also been established in Uzbekistan and Kyrgyzstan, causing both Russia and the People's Republic of China to voice their concern over a permanent U.S. military presence in the region.",
"title": "Geostrategy"
},
{
"paragraph_id": 101,
"text": "Western governments have accused Russia, China and the former Soviet republics of justifying the suppression of separatist movements, and the associated ethnics and religion with the War on Terror.",
"title": "Geostrategy"
}
] | Central Asia is a subregion of Asia that stretches from the Caspian Sea in the southwest and Eastern Europe in the northwest to Western China and Mongolia in the east, and from Afghanistan and Iran in the south to Russia in the north. It includes the former Soviet republics of Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan. Central Asian nations are colloquially referred to as the "-stans" as the countries all have names ending with the Persian suffix "-stan", meaning "land of". In the pre-Islamic and early Islamic eras Central Asia was inhabited predominantly by Iranian peoples, populated by Eastern Iranian-speaking Bactrians, Sogdians, Chorasmians, and the semi-nomadic Scythians and Dahae. After expansion by Turkic peoples, Central Asia also became the homeland for the Uzbeks, Kazakhs, Tatars, Turkmens, Kyrgyz, and Uyghurs; Turkic languages largely replaced the Iranian languages spoken in the area, with the exception of Tajikistan and areas where Tajik is spoken. Central Asia was historically closely tied to the Silk Road trade routes, acting as a crossroads for the movement of people, goods, and ideas between Europe and the Far East. Most countries in Central Asia are still integral parts of the world economy. From the mid-19th century until almost the end of the 20th century, Central Asia was colonised by the Russians and incorporated into the Russian Empire, and later the Soviet Union, which led to Russians and other Slavs emigrating into the area. Modern-day Central Asia is home to a large population of European settlers, who mostly live in Kazakhstan: 7 million Russians, 500,000 Ukrainians, and about 170,000 Germans. Stalinist-era forced deportation policies also mean that over 300,000 Koreans live there. Central Asia has a population of about 72 million, in five countries: Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan, and Uzbekistan (35 million). | 2001-10-09T20:52:23Z | 2023-12-31T22:57:11Z | [
"Template:Location map ",
"Template:Clarify",
"Template:Largest cities",
"Template:Lang-tg",
"Template:Short description",
"Template:Sfnp",
"Template:Wide image",
"Template:-",
"Template:Cite journal",
"Template:Citation",
"Template:Sister project links",
"Template:Navboxes",
"Template:Distinguish",
"Template:Nts",
"Template:ISSN",
"Template:Lang-ru",
"Template:Portal",
"Template:Cite magazine",
"Template:Iranica",
"Template:Convert",
"Template:Further",
"Template:UN Population",
"Template:Flag",
"Template:Harvp",
"Template:Circa",
"Template:Multiple image",
"Template:Cite web",
"Template:Cite news",
"Template:Encyclopaedia Iranica",
"Template:Cite book",
"Template:In lang",
"Template:ISBN",
"Template:Authority control",
"Template:Div col end",
"Template:Lang-uz-Latn-Cyrl",
"Template:Lang-fa",
"Template:Webarchive",
"Template:Main",
"Template:See also",
"Template:Reflist",
"Template:Refbegin",
"Template:Free-content attribution",
"Template:Refend",
"Template:Use dmy dates",
"Template:Infobox continent",
"Template:Div col",
"Template:Pie chart"
] | https://en.wikipedia.org/wiki/Central_Asia |
6,744 | Constantine II | Constantine II may refer to: | [
{
"paragraph_id": 0,
"text": "Constantine II may refer to:",
"title": ""
}
] | Constantine II may refer to: Constantine II (emperor) (317–340), Roman Emperor 337–340
Constantine III (usurper), known as Constantine II of Britain in British legend
Constans II, Byzantine emperor (630–668)
Antipope Constantine II, antipope from 767 to 768
Constantine II of Scotland, King of Scotland 900–942 or 943
Constantine II, Prince of Armenia
Constantine II of Cagliari
Constantine II of Torres, called de Martis, giudice of Logudoro
Constantine II the Woolmaker, Catholicos of the Armenian Apostolic Church
Constantine II, King of Armenia, first Latin King of Armenian Cilicia of the Lusignan dynasty
Constantine II of Bulgaria, last emperor of Bulgaria 1396–1422
Eskender (1471–1494), Emperor of Ethiopia sometimes known as Constantine II
Constantine II of Georgia
Constantine II, Prince of Mukhrani, Georgian nobleman
Constantine II of Kakheti, King of Kakheti 1722–1732
Constantine II of Greece (1940–2023), Olympic champion (1960) and formerly King of the Hellenes March 6, 1964 – June 1, 1973 | 2023-04-21T13:49:55Z | [
"Template:Hndis"
] | https://en.wikipedia.org/wiki/Constantine_II |
|
6,745 | Couscous | Couscous (Arabic: كُسْكُس, romanized: kuskus; Berber languages: ⵙⴽⵙⵓ, romanized: seksu) – sometimes called kusksi or kseksu – is a traditional North African dish of small steamed granules of rolled semolina that is often served with a stew spooned on top. Pearl millet, sorghum, bulgur, and other cereals are sometimes cooked in a similar way in other regions, and the resulting dishes are also sometimes called couscous.
Couscous is a staple food throughout the Maghrebi cuisines of Algeria, Tunisia, Mauritania, Morocco, and Libya. It was integrated into French and European cuisine at the beginning of the twentieth century, through the French colonial empire and the Pieds-Noirs of Algeria.
In 2020, couscous was added to UNESCO's Intangible Cultural Heritage list.
The word "couscous" (alternately cuscus or kuskus) was first noted in early 17th century French, from Arabic kuskus, from kaskasa 'to pound', and is probably of Berber origin. The term seksu is attested in various Berber dialects such as Kabyle and Rifain, while Saharan Berber dialects such as Touareg and Ghadames have a slightly different form, keskesu. This widespread geographical dispersion of the term strongly suggests its local Berber origin, lending further support to its likely Berber roots as Algerian linguist Salem Chaker suggests.
The Berber root *KS means "well formed, well rolled, rounded." Numerous names and pronunciations for couscous exist around the world.
It is unclear when couscous originated. Food historian Lucie Bolens believes couscous originated millennia ago, during the reign of Masinissa in the ancient kingdom of Numidia in present-day Algeria. Traces of cooking vessels akin to couscoussiers have been found in graves from the 3rd century BC, from the time of the Berber kings of Numidia, in the city of Tiaret, Algeria. Couscoussiers dating back to the 12th century were found in the ruins of Igiliz, located in the Sous valley of Morocco.
According to food writer Charles Perry, couscous originated among the Berbers of Algeria and Morocco between the end of the 11th-century Zirid dynasty, modern-day Algeria, and the rise of the 13th-century Almohad Caliphate. The historian Hady Idris noted that couscous is attested to during the Hafsid dynasty, but not the Zirid dynasty.
In the 12th century, Maghrebi cooks were preparing dishes of non-mushy grains by stirring flour with water to create light, round balls of couscous dough that could be steamed.
The historian Maxime Rodinson found three recipes for couscous from the 13th century Arabic cookbook Kitab al-Wusla ila al-Habib, written by an Ayyubid author, and the anonymous Arabic cooking book Kitab al tabikh and Ibn Razin al-Tujibi's Fadalat al-khiwan also contain recipes.
Couscous is believed to have been spread among the inhabitants of the Iberian Peninsula by the Berber dynasties of the 13th century, though it is no longer found in traditional Spanish or Portuguese cuisine. In modern-day Trapani, Sicily, the dish is still made to the medieval recipe of Andalusian author Ibn Razin al-Tujibi. Ligurian families that moved from Tabarka to Sardinia brought the dish with them to Carloforte in the 18th century.
Known in France since the 16th century, it was brought into French cuisine at the beginning of the 20th century via the French colonial empire and the Pieds-Noirs.
Couscous is traditionally made from semolina, the hardest part of the grain of durum wheat (the hardest of all forms of wheat), which resists the grinding of the millstone. The semolina is sprinkled with water and rolled with the hands to form small pellets, sprinkled with dry flour to keep them separate, and then sieved. Any pellets that are too small to be finished granules of couscous fall through the sieve and are again sprinkled with dry semolina and rolled into pellets. This labor-intensive process continues until all the semolina has been formed into tiny couscous granules. In the traditional method of preparing couscous, groups of people come together to make large batches over several days, which are then dried in the sun and used for several months. Handmade couscous may need to be rehydrated as it is prepared; this is achieved by a process of moistening and steaming over stew until the couscous reaches the desired light and fluffy consistency.
In some regions, couscous is made from farina or coarsely ground barley or pearl millet.
In modern times, couscous production is largely mechanized, and the product is sold worldwide. This couscous can be sauteed before it is cooked in water or another liquid. Properly cooked couscous is light and fluffy, not gummy or gritty.
Traditionally, North Africans use a food steamer (called a taseksut in the Berber language, a كِسْكَاس kiskas in Arabic or a couscoussier in French). The base is a tall metal pot shaped like an oil jar, where the meat and vegetables are cooked as a stew. On top of the base, a steamer sits where the couscous is cooked, absorbing the flavours from the stew. The steamer's lid has holes around its edge so steam can escape. It is also possible to use a pot with a steamer insert. If the holes are too big, the steamer can be lined with damp cheesecloth.
The couscous that is sold in most Western grocery stores is usually pre-steamed and dried. It is typically prepared by adding 1.5 measures of boiling water or stock to each measure of couscous and then leaving it covered tightly for about five minutes. Pre-steamed couscous takes less time to prepare than regular couscous, most dried pasta, or dried grains (such as rice). Packaged sets of quick-preparation couscous and canned vegetables, and generally meat, are routinely sold in European grocery stores and supermarkets. Couscous is widely consumed in France, where it was introduced by Maghreb immigrants and voted the third most popular dish in a 2011 survey.
In December 2020, Algeria, Mauritania, Morocco, and Tunisia obtained official recognition for the knowledge, know-how, and practices pertaining to the production and consumption of couscous on the Representative List of the Intangible Cultural Heritage of Humanity by UNESCO. The joint submission by the four countries was hailed as an "example of international cooperation."
Couscous proper is about 2 mm in diameter, but there also exists a larger variety (3 mm or more) known as berkoukes, as well as an ultra-fine version (around 1 mm). In Morocco, Algeria, Tunisia, and Libya, it is generally served with vegetables (carrots, potatoes, and turnips) cooked in a spicy or mild broth or stew, usually with some meat (generally, chicken, lamb, or mutton).
Algerian couscous is a traditional staple food in Algeria, and it plays an important role in Algerian culture and cuisine. It is commonly served with vegetables, meat, or fish. In Algeria, there are various types of couscous dishes.
In Tunisia, couscous is usually spicy, made with harissa sauce, and served commonly with vegetables and meat, including lamb, fish, seafood, beef, and sometimes (in southern regions) camel. Fish couscous is a Tunisian specialty and can also be made with octopus, squid or other seafood in a hot, red, spicy sauce. Couscous can also be served as a dessert. It is then called Masfuf. Masfuf can also contain raisins, grapes, or pomegranate seeds.
In Libya, couscous is mostly served with lamb (but sometimes camel meat or, rarely, beef) in Tripoli and the western parts of Libya, but not during official ceremonies or weddings. Another way to eat couscous is as a dessert; it is prepared with dates, sesame, and pure honey and is locally referred to as maghrood.
In Malta, small round pasta slightly larger than typical couscous is known as kusksu. It is commonly used in a dish of the same name, which includes broad beans (known in Maltese as ful) and ġbejniet, a local type of cheese.
In Mauritania, the couscous uses large wheat grains (mabroum) and is darker than the yellow couscous of Morocco. It is cooked with lamb, beef, or camel meat together with vegetables, primarily onion, tomato, and carrots, then mixed with a sauce and served with ghee, locally known as dhen.
Couscous is made from crushed wheat flour rolled into its constituent granules or pearls, making it distinct from pasta, even pasta such as orzo and risoni of similar size, which is made from ground wheat and either molded or extruded. Couscous and pasta have similar nutritional value, although pasta is usually more refined.
Several dishes worldwide are also made from granules like those of couscous, rolled from the flour of grains or from other milled or grated starchy crops.
{
"paragraph_id": 0,
"text": "Couscous (Arabic: كُسْكُس, romanized: kuskus; Berber languages: ⵙⴽⵙⵓ, romanized: seksu) – sometimes called kusksi or kseksu – is a traditional North African dish of small steamed granules of rolled semolina that is often served with a stew spooned on top. Pearl millet, sorghum, bulgur, and other cereals are sometimes cooked in a similar way in other regions, and the resulting dishes are also sometimes called couscous.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Couscous is a staple food throughout the Maghrebi cuisines of Algeria, Tunisia, Mauritania, Morocco, and Libya. It was integrated into French and European cuisine at the beginning of the twentieth century, through the French colonial empire and the Pieds-Noirs of Algeria.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 2020, couscous was added to UNESCO's Intangible Cultural Heritage list.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The word \"couscous\" (alternately cuscus or kuskus) was first noted in early 17th century French, from Arabic kuskus, from kaskasa 'to pound', and is probably of Berber origin. The term seksu is attested in various Berber dialects such as Kabyle and Rifain, while Saharan Berber dialects such as Touareg and Ghadames have a slightly different form, keskesu. This widespread geographical dispersion of the term strongly suggests its local Berber origin, lending further support to its likely Berber roots as Algerian linguist Salem Chaker suggests.",
"title": "Etymology"
},
{
"paragraph_id": 4,
"text": "The Berber root *KS means \"well formed, well rolled, rounded.\" Numerous names and pronunciations for couscous exist around the world.",
"title": "Etymology"
},
{
"paragraph_id": 5,
"text": "It is unclear when couscous originated. Food historian Lucie Bolens believes couscous originated millennia ago, during the reign of Masinissa in the ancient kingdom of Numidia in present-day Algeria. Traces of cooking vessels akin to couscoussiers have been found in graves from the 3rd century BC, from the time of the berber kings of Numidia, in the city of Tiaret, Algeria. Couscoussiers dating back to the 12th century were found in the ruins of Igiliz, located in the Sous valley of Morocco.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "According to food writer Charles Perry, couscous originated among the Berbers of Algeria and Morocco between the end of the 11th-century Zirid dynasty, modern-day Algeria, and the rise of the 13th-century Almohad Caliphate. The historian Hady Idris noted that couscous is attested to during the Hafsid dynasty, but not the Zirid dynasty.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In the 12th century, Maghrebi cooks were preparing dishes of non-mushy grains by stirring flour with water to create light, round balls of couscous dough that could be steamed.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The historian Maxime Rodinson found three recipes for couscous from the 13th century Arabic cookbook Kitab al-Wusla ila al-Habib, written by an Ayyubid author, and the anonymous Arabic cooking book Kitab al tabikh and Ibn Razin al-Tujibi's Fadalat al-khiwan also contain recipes.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Couscous is believed to have been spread among the inhabitants of the Iberian Peninsula by the Berber dynasties of the 13th century, though it is no longer found in traditional Spanish or Portuguese cuisine. In modern day Trapani, Sicily, the dish is still made to the medieval recipe of Andalusian author Ibn Razin al-Tujibi. Ligurian families that moved from Tabarka to Sardinia brought the dish with them to Carloforte in the 18th century.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "Known in France since the 16th century, it was brought into French cuisine at the beginning of the 20th century via the French colonial empire and the Pieds-Noirs.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Couscous is traditionally made from semolina, the hardest part of the grain of durum wheat (the hardest of all forms of wheat), which resists the grinding of the millstone. The semolina is sprinkled with water and rolled with the hands to form small pellets, sprinkled with dry flour to keep them separate, and then sieved. Any pellets that are too small to be finished, granules of couscous fall through the sieve and are again rolled and sprinkled with dry semolina and rolled into pellets. This labor-intensive process continues until all the semolina has been formed into tiny couscous granules. In the traditional method of preparing couscous, groups of people come together to make large batches over several days, which are then dried in the sun and used for several months. Handmade couscous may need to be rehydrated as it is prepared; this is achieved by a process of moistening and steaming over stew until the couscous reaches the desired light and fluffy consistency.",
"title": "Preparation"
},
{
"paragraph_id": 12,
"text": "In some regions, couscous is made from farina or coarsely ground barley or pearl millet.",
"title": "Preparation"
},
{
"paragraph_id": 13,
"text": "In modern times, couscous production is largely mechanized, and the product is sold worldwide. This couscous can be sauteed before it is cooked in water or another liquid. Properly cooked couscous is light and fluffy, not gummy or gritty.",
"title": "Preparation"
},
{
"paragraph_id": 14,
"text": "Traditionally, North Africans use a food steamer (called a taseksut in the Berber language, a كِسْكَاس kiskas in Arabic or a couscoussier in French language). The base is a tall metal pot shaped like an oil jar, where the meat and vegetables are cooked as a stew. On top of the base, a steamer sits where the couscous is cooked, absorbing the flavours from the stew. The steamer's lid has holes around its edge so steam can escape. It is also possible to use a pot with a steamer insert. If the holes are too big, the steamer can be lined with damp cheesecloth.",
"title": "Preparation"
},
{
"paragraph_id": 15,
"text": "The couscous that is sold in most Western grocery stores is usually pre-steamed and dried. It is typically prepared by adding 1.5 measures of boiling water or stock to each measure of couscous and then leaving it covered tightly for about five minutes. Pre-steamed couscous takes less time to prepare than regular couscous, most dried pasta, or dried grains (such as rice). Packaged sets of quick-preparation couscous and canned vegetables, and generally meat, are routinely sold in European grocery stores and supermarkets. Couscous is widely consumed in France, where it was introduced by Maghreb immigrants and voted the third most popular dish in a 2011 survey.",
"title": "Preparation"
},
{
"paragraph_id": 16,
"text": "In December 2020, Algeria, Mauritania, Morocco, and Tunisia obtained official recognition for the knowledge, know-how, and practices pertaining to the production and consumption of couscous on the Representative List of the Intangible Cultural Heritage of Humanity by UNESCO. The joint submission by the four countries was hailed as an \"example of international cooperation.\"",
"title": "Recognition"
},
{
"paragraph_id": 17,
"text": "Couscous proper is about 2 mm in diameter, but there also exists a larger variety (3 mm more) known as berkoukes, as well as an ultra-fine version (around 1 mm). In Morocco, Algeria, Tunisia, and Libya, it is generally served with vegetables (carrots, potatoes, and turnips) cooked in a spicy or mild broth or stew, usually with some meat (generally, chicken, lamb, or mutton).",
"title": "Local variations"
},
{
"paragraph_id": 18,
"text": "Algerian couscous is a traditional staple food in Algeria, and it plays an important role in Algerian culture and cuisine. It is commonly served with vegetables, meat, or fish. In Algeria, there are various types of couscous dishes.",
"title": "Local variations"
},
{
"paragraph_id": 19,
"text": "In Tunisia, couscous is usually spicy, made with harissa sauce, and served commonly with vegetables and meat, including lamb, fish, seafood, beef, and sometimes (in southern regions) camel. Fish couscous is a Tunisian specialty and can also be made with octopus, squid or other seafood in a hot, red, spicy sauce. Couscous can also be served as a dessert. It is then called Masfuf. Masfuf can also contain raisins, grapes, or pomegranate seeds.",
"title": "Local variations"
},
{
"paragraph_id": 20,
"text": "In Libya, couscous is mostly served with lamb (but sometimes camel meat or, rarely, beef) in Tripoli and the western parts of Libya, but not during official ceremonies or weddings. Another way to eat couscous is as a dessert; it is prepared with dates, sesame, and pure honey and is locally referred to as maghrood.",
"title": "Local variations"
},
{
"paragraph_id": 21,
"text": "In Malta, small round pasta slightly larger than typical couscous is known as kusksu. It is commonly used in a dish of the same name, which includes broad beans (known in Maltese as ful) and ġbejniet, a local type of cheese.",
"title": "Local variations"
},
{
"paragraph_id": 22,
"text": "In Mauritania, the couscous uses large wheat grains (mabroum) and is darker than the yellow couscous of Morocco. It is cooked with lamb, beef, or camel meat together with vegetables, primarily onion, tomato, and carrots, then mixed with a sauce and served with ghee, locally known as dhen.",
"title": "Local variations"
},
{
"paragraph_id": 23,
"text": "Couscous is made from crushed wheat flour rolled into its constituent granules or pearls, making it distinct from pasta, even pasta such as orzo and risoni of similar size, which is made from ground wheat and either molded or extruded. Couscous and pasta have similar nutritional value, although pasta is usually more refined.",
"title": "Similar foods"
},
{
"paragraph_id": 24,
"text": "Several dishes worldwide are also made from granules, like those of couscous rolled from flour from grains or other milled or grated starchy crops.",
"title": "Similar foods"
}
] | Couscous – sometimes called kusksi or kseksu – is a traditional North African dish of small steamed granules of rolled semolina that is often served with a stew spooned on top. Pearl millet, sorghum, bulgur, and other cereals are sometimes cooked in a similar way in other regions, and the resulting dishes are also sometimes called couscous. Couscous is a staple food throughout the Maghrebi cuisines of Algeria, Tunisia, Mauritania, Morocco, and Libya. It was integrated into French and European cuisine at the beginning of the twentieth century, through the French colonial empire and the Pieds-Noirs of Algeria. In 2020, couscous was added to UNESCO's Intangible Cultural Heritage list. | 2001-10-17T08:18:50Z | 2023-12-17T23:54:37Z | [
"Template:Cite encyclopedia",
"Template:Convert",
"Template:Transl",
"Template:Cookbook",
"Template:Notelist",
"Template:Noodle",
"Template:Wheat",
"Template:Deadlink",
"Template:Cite news",
"Template:Authority control",
"Template:Use mdy dates",
"Template:IPA-pt",
"Template:Commons category",
"Template:Portal",
"Template:Reflist",
"Template:Cuisine of Morocco",
"Template:Cuisine of Tunisia",
"Template:Jewish baked goods",
"Template:Distinguish",
"Template:Infobox food",
"Template:Efn",
"Template:Cite book",
"Template:Lang-ar",
"Template:Lang",
"Template:Lang-he",
"Template:Short description",
"Template:About",
"Template:Rp",
"Template:Cite web",
"Template:Cite journal",
"Template:African cuisine",
"Template:Cuisine of Algeria",
"Template:Use American English",
"Template:Pp-semi-indef",
"Template:Lang-ber",
"Template:Cuisine of Israel"
] | https://en.wikipedia.org/wiki/Couscous |
6,746 | Constantius II | Constantius II (Latin: Flavius Julius Constantius; Greek: Κωνστάντιος, translit. Kōnstántios; 7 August 317 – 3 November 361) was Roman emperor from 337 to 361. His reign saw constant warfare on the borders against the Sasanian Empire and Germanic peoples, while internally the Roman Empire went through repeated civil wars, court intrigues, and usurpations. His religious policies inflamed domestic conflicts that would continue after his death.
Constantius was a son of Constantine the Great, who elevated him to the imperial rank of Caesar on 8 November 324 and after whose death Constantius became Augustus together with his brothers, Constantine II and Constans on 9 September 337. He promptly oversaw the massacre of his father-in-law, an uncle, and several cousins, consolidating his hold on power. The brothers divided the empire among themselves, with Constantius receiving Greece, Thrace, the Asian provinces, and Egypt in the east. For the following decade a costly and inconclusive war against Persia took most of Constantius's time and attention. In the meantime, his brothers Constantine and Constans warred over the western provinces of the empire, leaving the former dead in 340 and the latter as sole ruler of the west. The two remaining brothers maintained an uneasy peace with each other until, in 350, Constans was overthrown and assassinated by the usurper Magnentius.
Unwilling to accept Magnentius as co-ruler, Constantius waged a civil war against the usurper, defeating him at the battles of Mursa Major in 351 and Mons Seleucus in 353. Magnentius committed suicide after the latter battle, leaving Constantius as sole ruler of the empire. In 351, Constantius elevated his cousin Constantius Gallus to the subordinate rank of Caesar to rule in the east, but had him executed three years later after receiving scathing reports of his violent and corrupt nature. Shortly thereafter, in 355, Constantius promoted his last surviving cousin, Gallus' younger half-brother Julian, to the rank of Caesar.
As emperor, Constantius promoted Arianism, banned pagan sacrifices, and issued laws against Jews. His military campaigns against Germanic tribes were successful: he defeated the Alamanni in 354 and campaigned across the Danube against the Quadi and Sarmatians in 357. The war against the Sasanians, which had been in a lull since 350, erupted with renewed intensity in 359 and Constantius travelled to the east in 360 to restore stability after the loss of several border fortresses. However, Julian claimed the rank of Augustus in 360, leading to war between the two after Constantius' attempts to persuade Julian to back down failed. No battle was fought, as Constantius became ill and died of fever on 3 November 361 in Mopsuestia, allegedly naming Julian as his rightful successor before his death.
Constantius was born in 317 at Sirmium, Pannonia, now Serbia. He was the third son of Constantine the Great, and second by his second wife Fausta, the daughter of Maximian. Constantius was made caesar by his father on 8 November 324. In 336, religious unrest in Armenia and tense relations between Constantine and king Shapur II caused war to break out between Rome and Sassanid Persia. Though he made initial preparations for the war, Constantine fell ill and sent Constantius east to take command of the eastern frontier. Before Constantius arrived, the Persian general Narses, who was possibly the king's brother, overran Mesopotamia and captured Amida. Constantius promptly attacked Narses, and after suffering minor setbacks defeated and killed Narses at the Battle of Narasara. Constantius captured Amida and initiated a major refortification of the city, enhancing the city's circuit walls and constructing large towers. He also built a new stronghold in the hinterland nearby, naming it Antinopolis.
In early 337, Constantius hurried to Constantinople after receiving news that his father was near death. After Constantine died, Constantius buried him with lavish ceremony in the Church of the Holy Apostles. Soon after his father's death Constantius supposedly ordered a massacre of his relatives descended from the second marriage of his paternal grandfather Constantius Chlorus, though the details are unclear. Eutropius, writing between 350 and 370, states that Constantius merely sanctioned "the act, rather than commanding it". The massacre killed two of Constantius' uncles and six of his cousins, including Hannibalianus and Dalmatius, rulers of Pontus and Moesia respectively. The massacre left Constantius, his older brother Constantine II, his younger brother Constans, and three cousins Gallus, Julian and Nepotianus as the only surviving male relatives of Constantine the Great.
Soon after, Constantius met his brothers in Pannonia at Sirmium to formalize the partition of the empire. Constantius received the eastern provinces, including Constantinople, Thrace, Asia Minor, Syria, Egypt, and Cyrenaica; Constantine received Britannia, Gaul, Hispania, and Mauretania; and Constans, initially under the supervision of Constantine II, received Italy, Africa, Illyricum, Pannonia, Macedonia, and Achaea.
Constantius then hurried east to Antioch to resume the war with Persia. While Constantius was away from the eastern frontier in early 337, King Shapur II assembled a large army, which included war elephants, and launched an attack on Roman territory, laying waste to Mesopotamia and putting the city of Nisibis under siege. Despite initial success, Shapur lifted his siege after his army missed an opportunity to exploit a collapsed wall. When Constantius learned of Shapur's withdrawal from Roman territory, he prepared his army for a counter-attack.
Constantius repeatedly defended the eastern border against invasions by the Sassanid Empire under Shapur. These conflicts were mainly limited to Sassanid sieges of the major fortresses of Roman Mesopotamia, including Nisibis (Nusaybin), Singara, and Amida (Diyarbakir). Although Shapur seems to have been victorious in most of these confrontations, the Sassanids were able to achieve little. However, the Romans won a decisive victory at the Battle of Narasara, killing Shapur's brother, Narses. Ultimately, Constantius was able to push back the invasion, and Shapur failed to make any significant gains.
Meanwhile, Constantine II desired to retain control of Constans' realm, leading the brothers into open conflict. Constantine was killed in 340 near Aquileia during an ambush. As a result, Constans took control of his deceased brother's realms and became sole ruler of the Western two-thirds of the empire. This division lasted until 350, when Constans was assassinated by forces loyal to the usurper Magnentius.
As the only surviving son of Constantine the Great, Constantius felt that the position of emperor was his alone, and he determined to march west to fight the usurper, Magnentius. However, feeling that the east still required some sort of imperial presence, he elevated his cousin Constantius Gallus to caesar of the eastern provinces. As an extra measure to ensure the loyalty of his cousin, he married the elder of his two sisters, Constantina, to him.
Before facing Magnentius, Constantius first came to terms with Vetranio, a loyal general in Illyricum who had recently been acclaimed emperor by his soldiers. Vetranio immediately sent letters to Constantius pledging his loyalty, which Constantius may have accepted simply in order to stop Magnentius from gaining more support. These events may have been spurred by the action of Constantina, who had since traveled east to marry Gallus. Constantius subsequently sent Vetranio the imperial diadem and acknowledged the general's new position as augustus. However, when Constantius arrived, Vetranio willingly resigned his position and accepted Constantius’ offer of a comfortable retirement in Bithynia.
In 351, Constantius clashed with Magnentius in Pannonia with a large army. The ensuing Battle of Mursa Major was one of the largest and bloodiest battles ever between two Roman armies. The result was a victory for Constantius, but a costly one. Magnentius survived the battle and, determined to fight on, withdrew into northern Italy. Rather than pursuing his opponent, however, Constantius turned his attention to securing the Danubian border, where he spent the early months of 352 campaigning against the Sarmatians along the middle Danube. After achieving his aims, Constantius advanced on Magnentius in Italy. This action led the cities of Italy to switch their allegiance to him and eject the usurper's garrisons. Again, Magnentius withdrew, this time to southern Gaul.
In 353, Constantius and Magnentius met for the final time at the Battle of Mons Seleucus in southern Gaul, and again Constantius emerged the victor. Magnentius, realizing the futility of continuing his position, committed suicide on 10 August 353.
Constantius spent much of the rest of 353 and early 354 on campaign against the Alamanni on the Danube frontier. The campaign was successful and raiding by the Alamanni ceased temporarily. In the meantime, Constantius had been receiving disturbing reports regarding the actions of his cousin Gallus. Possibly as a result of these reports, Constantius concluded a peace with the Alamanni and traveled to Mediolanum (Milan).
In Mediolanum, Constantius first summoned Ursicinus, Gallus’ magister equitum, for reasons that remain unclear. Constantius then summoned Gallus and Constantina. Although Gallus and Constantina complied with the order at first, when Constantina died in Bithynia, Gallus began to hesitate. However, after some convincing by one of Constantius’ agents, Gallus continued his journey west, passing through Constantinople and Thrace to Poetovio (Ptuj) in Pannonia.
In Poetovio, Gallus was arrested by the soldiers of Constantius under the command of Barbatio. Gallus was then moved to Pola and interrogated. Gallus claimed that it was Constantina who was to blame for all the trouble while he was in charge of the eastern provinces. This angered Constantius so greatly that he immediately ordered Gallus' execution. He soon changed his mind, however, and recanted the order. Unfortunately for Gallus, this second order was delayed by Eusebius, one of Constantius' eunuchs, and Gallus was executed.
Laws dating from the 350s prescribed the death penalty for those who performed or attended pagan sacrifices, and for the worshipping of idols. Pagan temples were shut down, and the Altar of Victory was removed from the Senate meeting house. There were also frequent episodes of ordinary Christians destroying, pillaging and desecrating many ancient pagan temples, tombs and monuments. Paganism was still popular among the population at the time. The emperor's policies were passively resisted by many governors and magistrates.
In spite of this, Constantius never made any attempt to disband the various Roman priestly colleges or the Vestal Virgins, and he never acted against the various pagan schools. At times he actually made some effort to protect paganism, even ordering the election of a priest for Africa. He also remained pontifex maximus and was deified by the Roman Senate after his death. His relative moderation toward paganism is reflected by the fact that it was not until over twenty years after his death, during the reign of Gratian, that any pagan senator protested his treatment of their religion.
Although often considered an Arian, Constantius ultimately preferred a third, compromise version that lay somewhere in between Arianism and the Nicene Creed, retrospectively called Semi-Arianism. During his reign he attempted to mold the Christian church to follow this compromise position, convening several Christian councils. "Unfortunately for his memory the theologians whose advice he took were ultimately discredited and the malcontents whom he pressed to conform emerged victorious," writes the historian A.H.M. Jones. "The great councils of 359–60 are therefore not reckoned ecumenical in the tradition of the church, and Constantius II is not remembered as a restorer of unity, but as a heretic who arbitrarily imposed his will on the church."
Judaism faced some severe restrictions under Constantius, who seems to have followed an anti-Jewish policy in line with that of his father. This included edicts to limit the ownership of slaves by Jewish people and banning marriages between Jews and Christian women. Later edicts sought to discourage conversions from Christianity to Judaism by confiscating the apostate's property. However, Constantius' actions in this regard may not have been so much to do with Jewish religion as with Jewish business—apparently, privately owned Jewish businesses were often in competition with state-owned businesses. As a result, Constantius may have sought to provide an advantage to state-owned businesses by limiting the skilled workers and slaves available to Jewish businesses.
On 11 August 355, the magister militum Claudius Silvanus revolted in Gaul. Silvanus had surrendered to Constantius after the Battle of Mursa Major. Constantius had made him magister militum in 353 with the purpose of blocking the German threats, a feat that Silvanus achieved by bribing the German tribes with the money he had collected. A plot organized by members of Constantius' court led the emperor to recall Silvanus. After Silvanus revolted, he received a letter from Constantius recalling him to Milan that made no reference to the revolt. Ursicinus, who was meant to replace Silvanus, bribed some troops, and Silvanus was killed.
Constantius realised that too many threats still faced the Empire, however, and he could not possibly handle all of them by himself. So on 6 November 355, he elevated his last remaining male relative, Julian, to the rank of caesar. A few days later, Julian was married to Helena, the last surviving sister of Constantius. Constantius soon sent Julian off to Gaul.
Constantius spent the next few years overseeing affairs in the western part of the empire primarily from his base at Mediolanum. In April–May 357 he visited Rome for the only time in his life. The same year, he forced Sarmatian and Quadi invaders out of Pannonia and Moesia Inferior, then led a successful counter-attack across the Danube.
In the winter of 357–58, Constantius received ambassadors from Shapur II who demanded that Rome restore the lands surrendered by Narseh. Despite rejecting these terms, Constantius tried to avert war with the Sassanid Empire by sending two embassies to Shapur II. Shapur II nevertheless launched another invasion of Roman Mesopotamia. In 360, when news reached Constantius that Shapur II had destroyed Singara (Sinjar), and taken Kiphas (Hasankeyf), Amida (Diyarbakır), and Ad Tigris (Cizre), he decided to travel east to face the re-emergent threat.
In the meantime, Julian had won some victories against the Alamanni, who had once again invaded Roman Gaul. However, when Constantius requested reinforcements from Julian's army for the eastern campaign, the Gallic legions revolted and proclaimed Julian augustus.
On account of the immediate Sassanid threat, Constantius was unable to directly respond to his cousin's usurpation, other than by sending missives in which he tried to convince Julian to resign the title of augustus and be satisfied with that of caesar. By 361, Constantius saw no alternative but to face the usurper with force, and yet the threat of the Sassanids remained. Constantius had already spent part of early 361 unsuccessfully attempting to re-take the fortress of Ad Tigris. After a time he had withdrawn to Antioch to regroup and prepare for a confrontation with Shapur II. The campaigns of the previous year had inflicted heavy losses on the Sassanids, however, and they did not attempt another round of campaigns that year. This temporary respite in hostilities allowed Constantius to turn his full attention to facing Julian.
Constantius immediately gathered his forces and set off west. However, by the time he reached Mopsuestia in Cilicia, it was clear that he was fatally ill and would not survive to face Julian. The sources claim that realising his death was near, Constantius had himself baptised by Euzoius, the Semi-Arian bishop of Antioch, and then declared that Julian was his rightful successor. Constantius II died of fever on 3 November 361.
Like Constantine the Great, he was buried in the Church of the Holy Apostles, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the De Ceremoniis.
Constantius II was married three times:
First to a daughter of his half-uncle Julius Constantius, whose name is unknown. She was a full-sister of Gallus and a half-sister of Julian. She died c. 352/3.
Second, to Eusebia, a woman of Macedonian origin, originally from the city of Thessalonica, whom Constantius married before his defeat of Magnentius in 353. She died in 360.
Third and lastly, in 360, to Faustina, who gave birth to Constantius' only child, a posthumous daughter named Constantia, who later married Emperor Gratian.
Constantius II is a particularly difficult figure to judge properly due to the hostility of most sources toward him. A. H. M. Jones writes that Constantius "appears in the pages of Ammianus as a conscientious emperor but a vain and stupid man, an easy prey to flatterers. He was timid and suspicious, and interested persons could easily play on his fears for their own advantage." However, Kent and M. and A. Hirmer suggest that Constantius "has suffered at the hands of unsympathetic authors, ecclesiastical and civil alike. To orthodox churchmen he was a bigoted supporter of the Arian heresy, to Julian the Apostate and the many who have subsequently taken his part he was a murderer, a tyrant and inept as a ruler". They go on to add, "Most contemporaries seem in fact to have held him in high esteem, and he certainly inspired loyalty in a way his brother could not". | [
{
"paragraph_id": 0,
"text": "Constantius II (Latin: Flavius Julius Constantius; Greek: Κωνστάντιος, translit. Kōnstántios; 7 August 317 – 3 November 361) was Roman emperor from 337 to 361. His reign saw constant warfare on the borders against the Sasanian Empire and Germanic peoples, while internally the Roman Empire went through repeated civil wars, court intrigues, and usurpations. His religious policies inflamed domestic conflicts that would continue after his death.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Constantius was a son of Constantine the Great, who elevated him to the imperial rank of Caesar on 8 November 324 and after whose death Constantius became Augustus together with his brothers, Constantine II and Constans on 9 September 337. He promptly oversaw the massacre of his father-in-law, an uncle, and several cousins, consolidating his hold on power. The brothers divided the empire among themselves, with Constantius receiving Greece, Thrace, the Asian provinces, and Egypt in the east. For the following decade a costly and inconclusive war against Persia took most of Constantius's time and attention. In the meantime, his brothers Constantine and Constans warred over the western provinces of the empire, leaving the former dead in 340 and the latter as sole ruler of the west. The two remaining brothers maintained an uneasy peace with each other until, in 350, Constans was overthrown and assassinated by the usurper Magnentius.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Unwilling to accept Magnentius as co-ruler, Constantius waged a civil war against the usurper, defeating him at the battles of Mursa Major in 351 and Mons Seleucus in 353. Magnentius committed suicide after the latter battle, leaving Constantius as sole ruler of the empire. In 351, Constantius elevated his cousin Constantius Gallus to the subordinate rank of Caesar to rule in the east, but had him executed three years later after receiving scathing reports of his violent and corrupt nature. Shortly thereafter, in 355, Constantius promoted his last surviving cousin, Gallus' younger half-brother Julian, to the rank of Caesar.",
"title": ""
},
{
"paragraph_id": 3,
"text": "As emperor, Constantius promoted Arianism, banned pagan sacrifices, and issued laws against Jews. His military campaigns against Germanic tribes were successful: he defeated the Alamanni in 354 and campaigned across the Danube against the Quadi and Sarmatians in 357. The war against the Sasanians, which had been in a lull since 350, erupted with renewed intensity in 359 and Constantius travelled to the east in 360 to restore stability after the loss of several border fortresses. However, Julian claimed the rank of Augustus in 360, leading to war between the two after Constantius' attempts to persuade Julian to back down failed. No battle was fought, as Constantius became ill and died of fever on 3 November 361 in Mopsuestia, allegedly naming Julian as his rightful successor before his death.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Constantius was born in 317 at Sirmium, Pannonia, now Serbia. He was the third son of Constantine the Great, and second by his second wife Fausta, the daughter of Maximian. Constantius was made caesar by his father on 8 November 324. In 336, religious unrest in Armenia and tense relations between Constantine and king Shapur II caused war to break out between Rome and Sassanid Persia. Though he made initial preparations for the war, Constantine fell ill and sent Constantius east to take command of the eastern frontier. Before Constantius arrived, the Persian general Narses, who was possibly the king's brother, overran Mesopotamia and captured Amida. Constantius promptly attacked Narses, and after suffering minor setbacks defeated and killed Narses at the Battle of Narasara. Constantius captured Amida and initiated a major refortification of the city, enhancing the city's circuit walls and constructing large towers. He also built a new stronghold in the hinterland nearby, naming it Antinopolis.",
"title": "Early life"
},
{
"paragraph_id": 5,
"text": "In early 337, Constantius hurried to Constantinople after receiving news that his father was near death. After Constantine died, Constantius buried him with lavish ceremony in the Church of the Holy Apostles. Soon after his father's death Constantius supposedly ordered a massacre of his relatives descended from the second marriage of his paternal grandfather Constantius Chlorus, though the details are unclear. Eutropius, writing between 350 and 370, states that Constantius merely sanctioned \"the act, rather than commanding it\". The massacre killed two of Constantius' uncles and six of his cousins, including Hannibalianus and Dalmatius, rulers of Pontus and Moesia respectively. The massacre left Constantius, his older brother Constantine II, his younger brother Constans, and three cousins Gallus, Julian and Nepotianus as the only surviving male relatives of Constantine the Great.",
"title": "Augustus in the East"
},
{
"paragraph_id": 6,
"text": "Soon after, Constantius met his brothers in Pannonia at Sirmium to formalize the partition of the empire. Constantius received the eastern provinces, including Constantinople, Thrace, Asia Minor, Syria, Egypt, and Cyrenaica; Constantine received Britannia, Gaul, Hispania, and Mauretania; and Constans, initially under the supervision of Constantine II, received Italy, Africa, Illyricum, Pannonia, Macedonia, and Achaea.",
"title": "Augustus in the East"
},
{
"paragraph_id": 7,
"text": "Constantius then hurried east to Antioch to resume the war with Persia. While Constantius was away from the eastern frontier in early 337, King Shapur II assembled a large army, which included war elephants, and launched an attack on Roman territory, laying waste to Mesopotamia and putting the city of Nisibis under siege. Despite initial success, Shapur lifted his siege after his army missed an opportunity to exploit a collapsed wall. When Constantius learned of Shapur's withdrawal from Roman territory, he prepared his army for a counter-attack.",
"title": "Augustus in the East"
},
{
"paragraph_id": 8,
"text": "Constantius repeatedly defended the eastern border against invasions by the Sassanid Empire under Shapur. These conflicts were mainly limited to Sassanid sieges of the major fortresses of Roman Mesopotamia, including Nisibis (Nusaybin), Singara, and Amida (Diyarbakir). Although Shapur seems to have been victorious in most of these confrontations, the Sassanids were able to achieve little. However, the Romans won a decisive victory at the Battle of Narasara, killing Shapur's brother, Narses. Ultimately, Constantius was able to push back the invasion, and Shapur failed to make any significant gains.",
"title": "Augustus in the East"
},
{
"paragraph_id": 9,
"text": "Meanwhile, Constantine II desired to retain control of Constans' realm, leading the brothers into open conflict. Constantine was killed in 340 near Aquileia during an ambush. As a result, Constans took control of his deceased brother's realms and became sole ruler of the Western two-thirds of the empire. This division lasted until 350, when Constans was assassinated by forces loyal to the usurper Magnentius.",
"title": "Augustus in the East"
},
{
"paragraph_id": 10,
"text": "As the only surviving son of Constantine the Great, Constantius felt that the position of emperor was his alone, and he determined to march west to fight the usurper, Magnentius. However, feeling that the east still required some sort of imperial presence, he elevated his cousin Constantius Gallus to caesar of the eastern provinces. As an extra measure to ensure the loyalty of his cousin, he married the elder of his two sisters, Constantina, to him.",
"title": "Augustus in the East"
},
{
"paragraph_id": 11,
"text": "Before facing Magnentius, Constantius first came to terms with Vetranio, a loyal general in Illyricum who had recently been acclaimed emperor by his soldiers. Vetranio immediately sent letters to Constantius pledging his loyalty, which Constantius may have accepted simply in order to stop Magnentius from gaining more support. These events may have been spurred by the action of Constantina, who had since traveled east to marry Gallus. Constantius subsequently sent Vetranio the imperial diadem and acknowledged the general's new position as augustus. However, when Constantius arrived, Vetranio willingly resigned his position and accepted Constantius’ offer of a comfortable retirement in Bithynia.",
"title": "Augustus in the East"
},
{
"paragraph_id": 12,
"text": "In 351, Constantius clashed with Magnentius in Pannonia with a large army. The ensuing Battle of Mursa Major was one of the largest and bloodiest battles ever between two Roman armies. The result was a victory for Constantius, but a costly one. Magnentius survived the battle and, determined to fight on, withdrew into northern Italy. Rather than pursuing his opponent, however, Constantius turned his attention to securing the Danubian border, where he spent the early months of 352 campaigning against the Sarmatians along the middle Danube. After achieving his aims, Constantius advanced on Magnentius in Italy. This action led the cities of Italy to switch their allegiance to him and eject the usurper's garrisons. Again, Magnentius withdrew, this time to southern Gaul.",
"title": "Augustus in the East"
},
{
"paragraph_id": 13,
"text": "In 353, Constantius and Magnentius met for the final time at the Battle of Mons Seleucus in southern Gaul, and again Constantius emerged the victor. Magnentius, realizing the futility of continuing his position, committed suicide on 10 August 353.",
"title": "Augustus in the East"
},
{
"paragraph_id": 14,
"text": "Constantius spent much of the rest of 353 and early 354 on campaign against the Alamanni on the Danube frontier. The campaign was successful and raiding by the Alamanni ceased temporarily. In the meantime, Constantius had been receiving disturbing reports regarding the actions of his cousin Gallus. Possibly as a result of these reports, Constantius concluded a peace with the Alamanni and traveled to Mediolanum (Milan).",
"title": "Solo reign"
},
{
"paragraph_id": 15,
"text": "In Mediolanum, Constantius first summoned Ursicinus, Gallus’ magister equitum, for reasons that remain unclear. Constantius then summoned Gallus and Constantina. Although Gallus and Constantina complied with the order at first, when Constantina died in Bithynia, Gallus began to hesitate. However, after some convincing by one of Constantius’ agents, Gallus continued his journey west, passing through Constantinople and Thrace to Poetovio (Ptuj) in Pannonia.",
"title": "Solo reign"
},
{
"paragraph_id": 16,
"text": "In Poetovio, Gallus was arrested by the soldiers of Constantius under the command of Barbatio. Gallus was then moved to Pola and interrogated. Gallus claimed that it was Constantina who was to blame for all the trouble while he was in charge of the eastern provinces. This angered Constantius so greatly that he immediately ordered Gallus' execution. He soon changed his mind, however, and recanted the order. Unfortunately for Gallus, this second order was delayed by Eusebius, one of Constantius' eunuchs, and Gallus was executed.",
"title": "Solo reign"
},
{
"paragraph_id": 17,
"text": "Laws dating from the 350s prescribed the death penalty for those who performed or attended pagan sacrifices, and for the worshipping of idols. Pagan temples were shut down, and the Altar of Victory was removed from the Senate meeting house. There were also frequent episodes of ordinary Christians destroying, pillaging and desecrating many ancient pagan temples, tombs and monuments. Paganism was still popular among the population at the time. The emperor's policies were passively resisted by many governors and magistrates.",
"title": "Solo reign"
},
{
"paragraph_id": 18,
"text": "In spite of this, Constantius never made any attempt to disband the various Roman priestly colleges or the Vestal Virgins. He never acted against the various pagan schools. At times, he actually made some effort to protect paganism. In fact, he even ordered the election of a priest for Africa. Also, he remained pontifex maximus and was deified by the Roman Senate after his death. His relative moderation toward paganism is reflected by the fact that it was over twenty years after his death, during the reign of Gratian, that any pagan senator protested his treatment of their religion.",
"title": "Solo reign"
},
{
"paragraph_id": 19,
"text": "Although often considered an Arian, Constantius ultimately preferred a third, compromise version that lay somewhere in between Arianism and the Nicene Creed, retrospectively called Semi-Arianism. During his reign he attempted to mold the Christian church to follow this compromise position, convening several Christian councils. \"Unfortunately for his memory the theologians whose advice he took were ultimately discredited and the malcontents whom he pressed to conform emerged victorious,\" writes the historian A.H.M. Jones. \"The great councils of 359–60 are therefore not reckoned ecumenical in the tradition of the church, and Constantius II is not remembered as a restorer of unity, but as a heretic who arbitrarily imposed his will on the church.\"",
"title": "Solo reign"
},
{
"paragraph_id": 20,
"text": "Judaism faced some severe restrictions under Constantius, who seems to have followed an anti-Jewish policy in line with that of his father. This included edicts to limit the ownership of slaves by Jewish people and banning marriages between Jews and Christian women. Later edicts sought to discourage conversions from Christianity to Judaism by confiscating the apostate's property. However, Constantius' actions in this regard may not have been so much to do with Jewish religion as with Jewish business—apparently, privately owned Jewish businesses were often in competition with state-owned businesses. As a result, Constantius may have sought to provide an advantage to state-owned businesses by limiting the skilled workers and slaves available to Jewish businesses.",
"title": "Solo reign"
},
{
"paragraph_id": 21,
"text": "On 11 August 355, the magister militum Claudius Silvanus revolted in Gaul. Silvanus had surrendered to Constantius after the Battle of Mursa Major. Constantius had made him magister militum in 353 with the purpose of blocking the German threats, a feat that Silvanus achieved by bribing the German tribes with the money he had collected. A plot organized by members of Constantius' court led the emperor to recall Silvanus. After Silvanus revolted, he received a letter from Constantius recalling him to Milan, but which made no reference to the revolt. Ursicinus, who was meant to replace Silvanus, bribed some troops, and Silvanus was killed.",
"title": "Solo reign"
},
{
"paragraph_id": 22,
"text": "Constantius realised that too many threats still faced the Empire, however, and he could not possibly handle all of them by himself. So on 6 November 355, he elevated his last remaining male relative, Julian, to the rank of caesar. A few days later, Julian was married to Helena, the last surviving sister of Constantius. Constantius soon sent Julian off to Gaul.",
"title": "Solo reign"
},
{
"paragraph_id": 23,
"text": "Constantius spent the next few years overseeing affairs in the western part of the empire primarily from his base at Mediolanum. In April–May 357 he visited Rome for the only time in his life. The same year, he forced Sarmatian and Quadi invaders out of Pannonia and Moesia Inferior, then led a successful counter-attack across the Danube.",
"title": "Solo reign"
},
{
"paragraph_id": 24,
"text": "In the winter of 357–58, Constantius received ambassadors from Shapur II who demanded that Rome restore the lands surrendered by Narseh. Despite rejecting these terms, Constantius tried to avert war with the Sassanid Empire by sending two embassies to Shapur II. Shapur II nevertheless launched another invasion of Roman Mesopotamia. In 360, when news reached Constantius that Shapur II had destroyed Singara (Sinjar), and taken Kiphas (Hasankeyf), Amida (Diyarbakır), and Ad Tigris (Cizre), he decided to travel east to face the re-emergent threat.",
"title": "Solo reign"
},
{
"paragraph_id": 25,
"text": "In the meantime, Julian had won some victories against the Alamanni, who had once again invaded Roman Gaul. However, when Constantius requested reinforcements from Julian's army for the eastern campaign, the Gallic legions revolted and proclaimed Julian augustus.",
"title": "Solo reign"
},
{
"paragraph_id": 26,
"text": "On account of the immediate Sassanid threat, Constantius was unable to directly respond to his cousin's usurpation, other than by sending missives in which he tried to convince Julian to resign the title of augustus and be satisfied with that of caesar. By 361, Constantius saw no alternative but to face the usurper with force, and yet the threat of the Sassanids remained. Constantius had already spent part of early 361 unsuccessfully attempting to re-take the fortress of Ad Tigris. After a time he had withdrawn to Antioch to regroup and prepare for a confrontation with Shapur II. The campaigns of the previous year had inflicted heavy losses on the Sassanids, however, and they did not attempt another round of campaigns that year. This temporary respite in hostilities allowed Constantius to turn his full attention to facing Julian.",
"title": "Solo reign"
},
{
"paragraph_id": 27,
"text": "Constantius immediately gathered his forces and set off west. However, by the time he reached Mopsuestia in Cilicia, it was clear that he was fatally ill and would not survive to face Julian. The sources claim that realising his death was near, Constantius had himself baptised by Euzoius, the Semi-Arian bishop of Antioch, and then declared that Julian was his rightful successor. Constantius II died of fever on 3 November 361.",
"title": "Solo reign"
},
{
"paragraph_id": 28,
"text": "Like Constantine the Great, he was buried in the Church of the Holy Apostles, in a porphyry sarcophagus that was described in the 10th century by Constantine VII Porphyrogenitus in the De Ceremoniis.",
"title": "Solo reign"
},
{
"paragraph_id": 29,
"text": "Constantius II was married three times:",
"title": "Marriages and children"
},
{
"paragraph_id": 30,
"text": "First to a daughter of his half-uncle Julius Constantius, whose name is unknown. She was a full-sister of Gallus and a half-sister of Julian. She died c. 352/3.",
"title": "Marriages and children"
},
{
"paragraph_id": 31,
"text": "Second, to Eusebia, a woman of Macedonian origin, originally from the city of Thessalonica, whom Constantius married before his defeat of Magnentius in 353. She died in 360.",
"title": "Marriages and children"
},
{
"paragraph_id": 32,
"text": "Third and lastly, in 360, to Faustina, who gave birth to Constantius' only child, a posthumous daughter named Constantia, who later married Emperor Gratian.",
"title": "Marriages and children"
},
{
"paragraph_id": 33,
"text": "Constantius II is a particularly difficult figure to judge properly due to the hostility of most sources toward him. A. H. M. Jones writes that Constantius \"appears in the pages of Ammianus as a conscientious emperor but a vain and stupid man, an easy prey to flatterers. He was timid and suspicious, and interested persons could easily play on his fears for their own advantage.\" However, Kent and M. and A. Hirmer suggest that Constantius \"has suffered at the hands of unsympathetic authors, ecclesiastical and civil alike. To orthodox churchmen he was a bigoted supporter of the Arian heresy, to Julian the Apostate and the many who have subsequently taken his part he was a murderer, a tyrant and inept as a ruler\". They go on to add, \"Most contemporaries seem in fact to have held him in high esteem, and he certainly inspired loyalty in a way his brother could not\".",
"title": "Reputation"
}
] | Constantius II was Roman emperor from 337 to 361. His reign saw constant warfare on the borders against the Sasanian Empire and Germanic peoples, while internally the Roman Empire went through repeated civil wars, court intrigues, and usurpations. His religious policies inflamed domestic conflicts that would continue after his death. Constantius was a son of Constantine the Great, who elevated him to the imperial rank of Caesar on 8 November 324 and after whose death Constantius became Augustus together with his brothers, Constantine II and Constans on 9 September 337. He promptly oversaw the massacre of his father-in-law, an uncle, and several cousins, consolidating his hold on power. The brothers divided the empire among themselves, with Constantius receiving Greece, Thrace, the Asian provinces, and Egypt in the east. For the following decade a costly and inconclusive war against Persia took most of Constantius's time and attention. In the meantime, his brothers Constantine and Constans warred over the western provinces of the empire, leaving the former dead in 340 and the latter as sole ruler of the west. The two remaining brothers maintained an uneasy peace with each other until, in 350, Constans was overthrown and assassinated by the usurper Magnentius. Unwilling to accept Magnentius as co-ruler, Constantius waged a civil war against the usurper, defeating him at the battles of Mursa Major in 351 and Mons Seleucus in 353. Magnentius committed suicide after the latter battle, leaving Constantius as sole ruler of the empire. In 351, Constantius elevated his cousin Constantius Gallus to the subordinate rank of Caesar to rule in the east, but had him executed three years later after receiving scathing reports of his violent and corrupt nature. Shortly thereafter, in 355, Constantius promoted his last surviving cousin, Gallus' younger half-brother Julian, to the rank of Caesar. As emperor, Constantius promoted Arianism, banned pagan sacrifices, and issued laws against Jews. His military campaigns against Germanic tribes were successful: he defeated the Alamanni in 354 and campaigned across the Danube against the Quadi and Sarmatians in 357. The war against the Sasanians, which had been in a lull since 350, erupted with renewed intensity in 359 and Constantius travelled to the east in 360 to restore stability after the loss of several border fortresses. However, Julian claimed the rank of Augustus in 360, leading to war between the two after Constantius' attempts to persuade Julian to back down failed. No battle was fought, as Constantius became ill and died of fever on 3 November 361 in Mopsuestia, allegedly naming Julian as his rightful successor before his death. | 2001-10-09T22:40:05Z | 2023-12-31T08:39:13Z | [
"Template:Citation needed",
"Template:Constantinian dynasty family tree",
"Template:Wikiquote",
"Template:S-reg",
"Template:Distinguish",
"Template:Use dmy dates",
"Template:Lang-grc-gre",
"Template:Sfn",
"Template:Main",
"Template:Chart top",
"Template:Tree chart",
"Template:Break",
"Template:S-hou",
"Template:Refend",
"Template:S-off",
"Template:Tree chart/start",
"Template:Tree chart/end",
"Template:Commons",
"Template:S-ttl",
"Template:S-aft",
"Template:Infobox royalty",
"Template:Chart bottom",
"Template:Cite journal",
"Template:S-bef",
"Template:See also",
"Template:Refbegin",
"Template:ISBN",
"Template:Wikicite",
"Template:S-start",
"Template:Short description",
"Template:Cite book",
"Template:Webarchive",
"Template:Dead link",
"Template:Authority control",
"Template:Lang-la",
"Template:Reflist",
"Template:Cite web",
"Template:Cbignore",
"Template:S-end",
"Template:Roman emperors"
] | https://en.wikipedia.org/wiki/Constantius_II |
6,747 | Constans | Flavius Julius Constans (c. 323 – 350), sometimes called Constans I, was Roman emperor from 337 to 350. He held the imperial rank of caesar from 333, and was the youngest son of Constantine the Great.
After his father's death, he was made augustus alongside his brothers in September 337. Constans was given the administration of the praetorian prefectures of Italy, Illyricum, and Africa. He defeated the Sarmatians in a campaign shortly afterwards. Quarrels over the sharing of power led to a civil war with his eldest brother and co-emperor Constantine II, who invaded Italy in 340 and was killed in battle by Constans's forces near Aquileia. Constans gained from him the praetorian prefecture of Gaul. Thereafter there were tensions with his remaining brother and co-augustus Constantius II (r. 337–361), including over the exiled bishop Athanasius of Alexandria, who in turn eulogized Constans as "the most pious Augustus... of blessed and everlasting memory." In the following years he campaigned against the Franks, and in 343 he visited Roman Britain, the last legitimate emperor to do so.
In January 350, Magnentius (r. 350–353), the commander of the Jovians and Herculians, a corps in the Roman army, was acclaimed augustus at Augustodunum (Autun) with the support of Marcellinus, the comes rei privatae. Magnentius overthrew and killed Constans. Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality.
Constans was probably born in 323. He was the third and youngest son of Constantine I and Fausta, his father's second wife. He was the grandson of both the augusti Constantius I and Maximian. When he was born his father Constantine was the empire's senior augustus, and at war with his colleague and brother-in-law Licinius I (r. 308–324). At the time of Constans's birth, his eldest brother Constantine II and his half-brother Crispus, Constantine's first-born son, already held the rank of caesar. Constans's half-aunt Constantia, a daughter of Constantius I, was Licinius's wife and mother to another caesar, Licinius II.
After the defeat of Licinius by Crispus at the Battle of the Hellespont and by Constantine at the Battle of Chrysopolis, Licinius and his son were spared at Constantine's half-sister's urging. Licinius was executed on a pretext shortly afterwards. In 326, Constans's mother Fausta was also put to death on Constantine's orders, as were Constans's half-brother Crispus and Licinius II. This left Constans's branch of the Constantinian dynasty – descended from Constantius I's relationship with Helena – in control of the imperial college.
According to the works of both Ausonius and Libanius he was educated at Constantinople under the tutelage of the poet Aemilius Magnus Arborius, who instructed him in Latin.
On 25 December 333, his father Constantine I elevated Constans to the imperial rank of caesar at Constantinople. He was nobilissimus caesar alongside his brothers Constantine II and Constantius II. Constans became engaged to Olympias, the daughter of the praetorian prefect Ablabius, but the marriage never came to pass. Official imagery was changed to accommodate an image of Constans as co-caesar beside his brothers and their father the augustus.
It is possible that the occasion of Constans's elevation to the imperial college was timed to coincide with the celebration of the millennium of the city of Byzantium, whose re-foundation as Constantinople Constantine had begun the previous decade. In 248, Rome had celebrated its own millennium, the Secular Games (Latin: ludi saeculares), in the reign of Philip the Arab (r. 244–249). Philip may also have raised his son to co-augustus at the start of the anniversary year. Rome had been calculated by the 1st-century BC Latin author Marcus Terentius Varro to have been founded by Romulus in 753 BC. Byzantium was thought to have been founded in 667 BC by Byzas, according to the reckoning derived from the Histories of Herodotus, the 5th-century BC Greek historian, and the writings of Constantine's court historian Eusebius of Caesarea in his Chronicon.
With Constantine's death in 337, Constans and his two brothers, Constantine II and Constantius II, divided the Roman world among themselves and disposed of virtually all relatives who could possibly have a claim to the throne. The army proclaimed them augusti on 9 September 337. Almost immediately, Constans was required to deal with a Sarmatian invasion in late 337, in which he won a resounding victory.
Constans managed to extract the prefecture of Illyricum and the diocese of Thrace, provinces that were originally to be ruled by his cousin Dalmatius, as per Constantine I's proposed division after his death. Constantine II soon complained that he had not received the amount of territory that was his due as the eldest son.
Annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius, Constantine demanded that Constans hand over the African provinces, which he agreed to do in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage and Constantine, and which parts belonged to Italy and Constans. This led to growing tensions between the two brothers, which were only heightened by Constans finally coming of age and Constantine refusing to give up his guardianship. In 340 Constantine II invaded Italy. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was eventually trapped at Aquileia, where he died, leaving Constans to inherit all of his brother's former territories – Hispania, Britannia and Gaul.
Constans began his reign in an energetic fashion. In 341–342, he led a successful campaign against the Franks, and in the early months of 343 he visited Britain, probably as part of a military campaign.
Regarding religion, Constans was tolerant of Judaism and promulgated an edict banning pagan sacrifices in 341. He suppressed Donatism in Africa and supported Nicene orthodoxy against Arianism, which was championed by his brother Constantius. Although Constans called the Council of Serdica in 343 to settle the conflict, it was a complete failure, and by 346 the two emperors were on the point of open warfare over the dispute.
Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. The Roman historian Eutropius says Constans "indulged in great vices," in reference to his homosexuality, and Aurelius Victor stated that Constans had a reputation for scandalous behaviour with "handsome barbarian hostages." Nevertheless, Constans did sponsor a decree alongside Constantius II that ruled that "unnatural" sex should be punished meticulously. However, according to John Boswell, it was likely that Constans promulgated the legislation under pressure from Christian leaders, in an attempt to placate public outrage at his own perceived indecencies.
In the final years of his reign, Constans developed a reputation for cruelty and misrule. Dominated by favourites and openly preferring his select bodyguard, he lost the support of the legions. On 18 January 350, the general Magnentius declared himself emperor at Augustodunum (Autun) with the support of the troops on the Rhine frontier and, later, the western provinces of the Empire. Constans was enjoying himself nearby when he was notified of the elevation of Magnentius. Lacking any support beyond his immediate household, he was forced to flee for his life. As he was trying to reach Hispania, supporters of Magnentius cornered him in a fortification in Helena (Elne) in the eastern Pyrenees of southwestern Gaul, where he was killed after seeking sanctuary in a temple. An alleged prophecy at his birth had said Constans would die "in the arms of his grandmother". His place of death happens to have been named after Helena, mother of Constantine and his own grandmother, thus realizing the prophecy. | [
{
"paragraph_id": 0,
"text": "Flavius Julius Constans (c. 323 – 350), sometimes called Constans I, was Roman emperor from 337 to 350. He held the imperial rank of caesar from 333, and was the youngest son of Constantine the Great.",
"title": ""
},
{
"paragraph_id": 1,
"text": "After his father's death, he was made augustus alongside his brothers in September 337. Constans was given the administration of the praetorian prefectures of Italy, Illyricum, and Africa. He defeated the Sarmatians in a campaign shortly afterwards. Quarrels over the sharing of power led to a civil war with his eldest brother and co-emperor Constantine II, who invaded Italy in 340 and was killed in battle by Constans's forces near Aquileia. Constans gained from him the praetorian prefecture of Gaul. Thereafter there were tensions with his remaining brother and co-augustus Constantius II (r. 337–361), including over the exiled bishop Athanasius of Alexandria, who in turn eulogized Constans as \"the most pious Augustus... of blessed and everlasting memory.\" In the following years he campaigned against the Franks, and in 343 he visited Roman Britain, the last legitimate emperor to do so.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In January 350, Magnentius (r. 350–353) the commander of the Jovians and Herculians, a corps in the Roman army, was acclaimed augustus at Augustodunum (Autun) with the support of Marcellinus, the comes rei privatae. Magnentius overthrew and killed Constans. Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Constans was probably born in 323. He was the third and youngest son of Constantine I and Fausta, his father's second wife. He was the grandson of both the augusti Constantius I and Maximian. When he was born his father Constantine was the empire's senior augustus, and at war with his colleague and brother-in-law Licinius I (r. 308–324). At the time of Constans's birth, his eldest brother Constantine II and his half-brother Crispus, Constantine's first-born son, already held the rank of caesar. Constans's half-aunt Constantia, a daughter of Constantius I, was Licinius's wife and mother to another caesar, Licinius II.",
"title": "Early life"
},
{
"paragraph_id": 4,
"text": "After the defeat of Licinius by Crispus at the Battle of the Hellespont and at the Battle of Chrysopolis by Constantine, Licinius and his son were spared at Constantine's half-sister's urging. Licinius was executed on a pretext shortly afterwards. In 326, Constans's mother Fausta was also put to death on Constantine's orders, as were Constans's half-brother Crispus and Licinius II. This left Constans's branch of the Constantinian dynasty – descended from Constantius I's relationship with Helena – in control of the imperial college.",
"title": "Early life"
},
{
"paragraph_id": 5,
"text": "According to the works of both Ausonius and Libanius he was educated at Constantinople under the tutelage of the poet Aemilius Magnus Arborius, who instructed him in Latin.",
"title": "Early life"
},
{
"paragraph_id": 6,
"text": "On 25 December 333, his father Constantine I elevated Constans to the imperial rank of caesar at Constantinople. He was nobilissimus caesar alongside his brothers Constantine II and Constantius II. Constans became engaged to Olympias, the daughter of the praetorian prefect Ablabius, but the marriage never came to pass. Official imagery was changed to accommodate an image of Constans as co-caesar beside his brothers and their father the augustus.",
"title": "Reign"
},
{
"paragraph_id": 7,
"text": "It is possible that the occasion of Constans's elevation to the imperial college was timed to coincide with the celebration of the millennium of the city of Byzantium, whose re-foundation as Constantinople Constantine had begun the previous decade. In 248, Rome had celebrated its own millennium, the Secular Games (Latin: ludi saeculares), in the reign of Philip the Arab (r. 244–249). Philip may also have raised his son to co-augustus at the start of the anniversary year. Rome had been calculated by the 1st-century BC Latin author Marcus Terentius Varro to have been founded by Romulus in 753 BC. Byzantium was thought to have been founded in 667 BC by Byzas, according to the reckoning derived from the Histories of Herodotus, the 5th-century BC Greek historian, and the writings of Constantine's court historian Eusebius of Caesarea in his Chronicon.",
"title": "Reign"
},
{
"paragraph_id": 8,
"text": "With Constantine's death in 337, Constans and his two brothers, Constantine II and Constantius II, divided the Roman world among themselves and disposed of virtually all relatives who could possibly have a claim to the throne. The army proclaimed them augusti on 9 September 337. Almost immediately, Constans was required to deal with a Sarmatian invasion in late 337, in which he won a resounding victory.",
"title": "Reign"
},
{
"paragraph_id": 9,
"text": "Constans managed to extract the prefecture of Illyricum and the diocese of Thrace, provinces that were originally to be ruled by his cousin Dalmatius, as per Constantine I's proposed division after his death. Constantine II soon complained that he had not received the amount of territory that was his due as the eldest son.",
"title": "Reign"
},
{
"paragraph_id": 10,
"text": "Annoyed that Constans had received Thrace and Macedonia after the death of Dalmatius, Constantine demanded that Constans hand over the African provinces, which he agreed to do in order to maintain a fragile peace. Soon, however, they began quarreling over which parts of the African provinces belonged to Carthage and Constantine, and which parts belonged to Italy and Constans. This led to growing tensions between the two brothers, which were only heightened by Constans finally coming of age and Constantine refusing to give up his guardianship. In 340 Constantine II invaded Italy. Constans, at that time in Dacia, detached and sent a select and disciplined body of his Illyrian troops, stating that he would follow them in person with the remainder of his forces. Constantine was eventually trapped at Aquileia, where he died, leaving Constans to inherit all of his brother's former territories – Hispania, Britannia and Gaul.",
"title": "Reign"
},
{
"paragraph_id": 11,
"text": "Constans began his reign in an energetic fashion. In 341–342, he led a successful campaign against the Franks, and in the early months of 343 he visited Britain, probably as part of a military campaign.",
"title": "Reign"
},
{
"paragraph_id": 12,
"text": "Regarding religion, Constans was tolerant of Judaism and promulgated an edict banning pagan sacrifices in 341. He suppressed Donatism in Africa and supported Nicene orthodoxy against Arianism, which was championed by his brother Constantius. Although Constans called the Council of Serdica in 343 to settle the conflict, it was a complete failure, and by 346 the two emperors were on the point of open warfare over the dispute.",
"title": "Reign"
},
{
"paragraph_id": 13,
"text": "Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. The Roman historian Eutropius says Constans \"indulged in great vices,\" in reference to his homosexuality, and Aurelius Victor stated that Constans had a reputation for scandalous behaviour with \"handsome barbarian hostages.\" Nevertheless, Constans did sponsor a decree alongside Constantius II that ruled that \"unnatural\" sex should be punished meticulously. However, according to John Boswell, it was likely that Constans promulgated the legislation under pressure from Christian leaders, in an attempt to placate public outrage at his own perceived indecencies.",
"title": "Reign"
},
{
"paragraph_id": 14,
"text": "In the final years of his reign, Constans developed a reputation for cruelty and misrule. Dominated by favourites and openly preferring his select bodyguard, he lost the support of the legions. On 18 January 350, the general Magnentius declared himself emperor at Augustodunum (Autun) with the support of the troops on the Rhine frontier and, later, the western provinces of the Empire. Constans was enjoying himself nearby when he was notified of the elevation of Magnentius. Lacking any support beyond his immediate household, he was forced to flee for his life. As he was trying to reach Hispania, supporters of Magnentius cornered him in a fortification in Helena (Elne) in the eastern Pyrenees of southwestern Gaul, where he was killed after seeking sanctuary in a temple. An alleged prophecy at his birth had said Constans would die \"in the arms of his grandmother\". His place of death happens to have been named after Helena, mother of Constantine and his own grandmother, thus realizing the prophecy.",
"title": "Death"
}
] | Flavius Julius Constans, sometimes called Constans I, was Roman emperor from 337 to 350. He held the imperial rank of caesar from 333, and was the youngest son of Constantine the Great. After his father's death, he was made augustus alongside his brothers in September 337. Constans was given the administration of the praetorian prefectures of Italy, Illyricum, and Africa. He defeated the Sarmatians in a campaign shortly afterwards. Quarrels over the sharing of power led to a civil war with his eldest brother and co-emperor Constantine II, who invaded Italy in 340 and was killed in battle by Constans's forces near Aquileia. Constans gained from him the praetorian prefecture of Gaul. Thereafter there were tensions with his remaining brother and co-augustus Constantius II (r. 337–361), including over the exiled bishop Athanasius of Alexandria, who in turn eulogized Constans as "the most pious Augustus... of blessed and everlasting memory." In the following years he campaigned against the Franks, and in 343 he visited Roman Britain, the last legitimate emperor to do so. In January 350, Magnentius (r. 350–353), the commander of the Jovians and Herculians, a corps in the Roman army, was acclaimed augustus at Augustodunum (Autun) with the support of Marcellinus, the comes rei privatae. Magnentius overthrew and killed Constans. Surviving sources, possibly influenced by the propaganda of Magnentius's faction, accuse Constans of misrule and of homosexuality. | 2001-10-09T22:44:58Z | 2023-12-22T19:35:50Z | [
"Template:Chart bottom",
"Template:Cite book",
"Template:Authority control",
"Template:Other uses",
"Template:Break",
"Template:Citation",
"Template:Cite web",
"Template:Short description",
"Template:S-reg",
"Template:S-aft",
"Template:Reign",
"Template:Lang-la",
"Template:Page needed",
"Template:66 PHPC",
"Template:Smallcaps",
"Template:Chart top",
"Template:Commons category-inline",
"Template:S-start",
"Template:S-off",
"Template:Use dmy dates",
"Template:Circa",
"Template:Reflist",
"Template:S-bef",
"Template:S-ttl",
"Template:Infobox Roman emperor",
"Template:Constantinian dynasty family tree",
"Template:Portal",
"Template:Cite journal",
"Template:Tree chart/end",
"Template:S-end",
"Template:Roman emperors",
"Template:Sfn",
"Template:See also",
"Template:Tree chart/start",
"Template:Tree chart"
] | https://en.wikipedia.org/wiki/Constans |
6,749 | Cheerleading | Cheerleading is an activity in which the participants (called cheerleaders) cheer for their team as a form of encouragement. It can range from chanting slogans to intense physical activity. It can be performed to motivate sports teams, to entertain the audience, or for competition. Cheerleading routines typically range anywhere from one to three minutes, and contain components of tumbling, dance, jumps, cheers, and stunting.
Modern cheerleading is very closely associated with American football and basketball. Sports such as association football (soccer), ice hockey, volleyball, baseball, and wrestling will sometimes sponsor cheerleading squads. The ICC Twenty20 Cricket World Cup in South Africa in 2007 was the first international cricket event to have cheerleaders. The Florida Marlins were the first Major League Baseball team to have a cheerleading team.
Cheerleading originated as an all-male activity in the United States, and remains a predominantly American activity, with an estimated 3.85 million participants as of 2017. The global presentation of cheerleading was led by the 1997 broadcast of ESPN's International cheerleading competition, and the worldwide release of the 2000 film Bring It On. The International Cheer Union (ICU) now claims 116 member nations with an estimated 7.5 million participants worldwide. The sport has gained considerable traction in Australia, Canada, Mexico, China, Colombia, Finland, France, Germany, Japan, the Netherlands, New Zealand, and the United Kingdom, with popularity continuing to grow as sport leaders pursue Olympic status.
Cheerleading carries the highest rate of catastrophic injuries to female athletes in sports, with most injuries associated with stunting, also known as pyramids.
Cheerleading began during the late 18th century with the rebellion of male students. After the American Revolutionary War, students experienced harsh treatment from teachers. In response to faculty's abuse, college students violently acted out. The undergraduates began to riot, burn down buildings located on their college campuses, and assault faculty members. As a more subtle way to gain independence, however, students invented and organized their own extracurricular activities outside their professors' control. This brought about American sports, beginning first with collegiate teams.
In the 1860s, students from Great Britain began to cheer and chant in unison for their favorite athletes at sporting events. Soon, that gesture of support crossed overseas to America.
On November 6, 1869, the United States witnessed its first intercollegiate football game. It took place between Princeton University and Rutgers University, and marked the day the original "Sis Boom Rah!" cheer was shouted out by student fans.
Organized cheerleading began as an all-male activity. As early as 1877, Princeton University had a "Princeton Cheer", documented in the February 22, 1877, March 12, 1880, and November 4, 1881, issues of The Daily Princetonian. This cheer was yelled from the stands by students attending games, as well as by the athletes themselves. The cheer, "Hurrah! Hurrah! Hurrah! Tiger! S-s-s-t! Boom! A-h-h-h!" remains in use with slight modifications today, where it is now referred to as the "Locomotive".
Princeton class of 1882 graduate Thomas Peebles moved to Minnesota in 1884. He transplanted the idea of organized crowds cheering at football games to the University of Minnesota.
The term "Cheer Leader" had been used as early as 1897, with Princeton's football officials having named three students as Cheer Leaders: Thomas, Easton, and Guerin from Princeton's classes of 1897, 1898, and 1899, respectively, on October 26, 1897. These students would cheer for the team also at football practices, and special cheering sections were designated in the stands for the games themselves for both the home and visiting teams.
It was not until 1898 that University of Minnesota student Johnny Campbell directed a crowd in cheering "Rah, Rah, Rah! Ski-u-mah, Hoo-Rah! Hoo-Rah! Varsity! Varsity! Varsity, Minn-e-So-Tah!", making Campbell the very first cheerleader.
November 2, 1898, is the official birth date of organized cheerleading. Soon after, the University of Minnesota organized a "yell leader" squad of six male students, who still use Campbell's original cheer today.
In 1903, the first cheerleading fraternity, Gamma Sigma, was founded.
In 1923, at the University of Minnesota, women were permitted to participate in cheerleading. However, it took time for other schools to follow. In the late 1920s, many school manuals and newspapers that were published still referred to cheerleaders as "chap", "fellow", and "man".
Women cheerleaders were overlooked until the 1940s when collegiate men were drafted for World War II, creating the opportunity for more women to make their way onto sporting event sidelines. As noted by Kieran Scott in Ultimate Cheerleading: "Girls really took over for the first time."
In 1949, Lawrence Herkimer, a former cheerleader at Southern Methodist University and inventor of the herkie jump, founded his first cheerleading camp in Huntsville, Texas. Fifty-two girls were in attendance. The clinic was so popular that Herkimer was asked to hold a second, which drew 350 young women. Herkimer also patented the pom-pom.
In 1951, Herkimer created the National Cheerleading Association to help grow the activity and provide cheerleading education to schools around the country.
During the 1950s, female participation in cheerleading continued to grow. An overview written on behalf of cheerleading in 1955 explained that in larger schools, "occasionally boys as well as girls are included", and in smaller schools, "boys can usually find their place in the athletic program, and cheerleading is likely to remain solely a feminine occupation". Cheerleading could be found at almost every school level across the country; even pee wee and youth leagues began to appear.
In the 1950s, professional cheerleading also began. The first recorded cheer squad in National Football League (NFL) history was for the Baltimore Colts. Professional cheerleaders put a new perspective on American cheerleading. Women were exclusively chosen for dancing ability as well as to conform to the male gaze, as heterosexual men were the targeted marketing group.
By the 1960s, college cheerleaders employed by the NCA were hosting workshops across the nation, teaching fundamental cheer skills to tens of thousands of high-school-age girls. Herkimer also contributed many notable firsts to cheerleading: the founding of a cheerleading uniform supply company, inventing the herkie jump (where one leg is bent towards the ground as if kneeling and the other is out to the side as high as it will stretch in toe-touch position), and creating the "Spirit Stick".
In 1965, Fred Gastoff invented the vinyl pom-pom, which was introduced into competitions by the International Cheerleading Foundation (ICF, now the World Cheerleading Association, or WCA). Organized cheerleading competitions began to pop up with the first ranking of the "Top Ten College Cheerleading Squads" and "Cheerleader All America" awards given out by the ICF in 1967.
The Dallas Cowboys Cheerleaders soon gained the spotlight with their revealing outfits and sophisticated dance moves, debuting in the 1972–1973 season, but were first widely seen in Super Bowl X (1976). These pro squads of the 1970s established cheerleaders as "American icons of wholesome sex appeal."
In 1975, Randy Neil estimated that over 500,000 students actively participated in American cheerleading from elementary school to the collegiate level. Neil also approximated that ninety-five percent of cheerleaders within America were female.
In 1978, America was introduced to competitive cheerleading by the first broadcast of Collegiate Cheerleading Championships on CBS.
The 1980s saw the beginning of modern cheerleading, adding difficult stunt sequences and gymnastics into routines. All-star teams, or those not affiliated with a school, popped up, and eventually led to the creation of the U.S. All Star Federation (USASF). ESPN first broadcast the National High School Cheerleading Competition nationwide in 1983.
By 1981, a total of seventeen National Football League teams had their own cheerleaders. The only teams without NFL cheerleaders at this time were New Orleans, New York, Detroit, Cleveland, Denver, Minnesota, Pittsburgh, San Francisco, and San Diego. Professional cheerleading eventually spread to soccer and basketball teams as well.
Cheerleading organizations such as the American Association of Cheerleading Coaches and Advisors (AACCA), founded in 1987, started applying universal safety standards to decrease the number of injuries and prevent dangerous stunts, pyramids, and tumbling passes from being included in the cheerleading routines. In 2003, the National Council for Spirit Safety and Education (NCSSE) was formed to offer safety training for youth, school, all-star, and college coaches. The NCAA now requires college cheer coaches to successfully complete a nationally recognized safety-training program.
Even with its athletic and competitive development, cheerleading at the school level has retained its ties to its spirit leading traditions. Cheerleaders are quite often seen as ambassadors for their schools, and leaders among the student body. At the college level, cheerleaders are often invited to help at university fundraisers and events.
Debuting in 2003, the "Marlin Mermaids" gained national exposure, and have influenced other MLB teams to develop their own cheer/dance squads.
As of 2005, overall statistics show around 97% of all modern cheerleading participants are female, although at the collegiate level, cheerleading is co-ed with about 50% of participants being male. Modern male cheerleaders' stunts focus less on flexibility and more on tumbling, flips, pikes, and handstands. These depend on strong legs and a strong core.
In 2019, Napoleon Jinnies and Quinton Peron became the first male cheerleaders in the history of the NFL to perform at the Super Bowl.
Kristi Yamaoka, a cheerleader for Southern Illinois University, suffered a fractured vertebra when she hit her head after falling from a human pyramid. She also suffered a concussion and a bruised lung. The fall occurred when Yamaoka lost her balance during a basketball game between Southern Illinois University and Bradley University at the Savvis Center in St. Louis on March 5, 2006. The fall gained "national attention" because Yamaoka continued to perform from a stretcher as she was moved away from the game.
The accident caused the Missouri Valley Conference to ban its member schools from allowing cheerleaders to be "launched or tossed and from taking part in formations higher than two levels" for one week during a women's basketball conference tournament. It also resulted in a recommendation by the NCAA that conferences and tournaments not allow pyramids two and one half levels high or higher, or the stunt known as the basket toss, during the rest of the men's and women's basketball season. On July 11, 2006, the bans were made permanent by the AACCA rules committee:
The committee unanimously voted for sweeping revisions to cheerleading safety rules, the most significant of which restricts specific upper-level skills during basketball games. Basket tosses, 2½-high pyramids, one-arm stunts, stunts that involve twisting or flipping, and twisting tumbling skills may be performed only during halftime and post-game on a matted surface and are prohibited during game play or time-outs.
Most American elementary schools, middle schools, high schools, and colleges have organized cheerleading squads. Some colleges even offer cheerleading scholarships for students. A school cheerleading team may compete locally, regionally, or nationally, but their main purpose is typically to cheer for sporting events and encourage audience participation. Cheerleading is quickly becoming a year-round activity, starting with tryouts during the spring semester of the preceding school year. Teams may attend organized summer cheerleading camps and practices to improve skills and create routines for competition.
In addition to supporting their schools' football or other sports teams, student cheerleaders may compete with recreational-style routines at competitions year-round.
In more recent years, it has become more common for elementary schools to have an organized cheerleading team. These teams introduce younger children to the sport and accustom them to leading crowds. Because young children learn quickly, tumbling skills often come easily to students at the elementary level.
Middle school cheerleading evolved shortly after high school squads were created and is set at the district level. In middle school, cheerleading squads serve the same purpose, but often follow a modified set of rules from high school squads with possible additional rules. Squads can cheer for basketball teams, football teams, and other sports teams in their school. Squads may also perform at pep rallies and compete against other local schools from the area. Cheerleading in middle school sometimes can be a two-season activity: fall and winter. However, many middle school cheer squads will go year-round like high school squads. Middle school cheerleaders use the same cheerleading movements as their older counterparts, yet may perform less extreme stunts and tumbling elements, depending on the rules in their area.
In high school, there are usually two squads per school: varsity and junior varsity. High school cheerleading contains aspects of school spirit as well as competition. These squads have become part of a year-round cycle that starts with tryouts in the spring and continues with year-round practice, cheering on teams in the fall and winter, and participation in cheerleading competitions. Most squads practice at least three days a week for about two hours each practice during the summer. Many teams also attend separate tumbling sessions outside of practice. During the school year, cheerleading is usually practiced five to six days a week. During competition season, it often becomes seven days a week, sometimes with practice twice a day. The school spirit aspect of cheerleading involves cheering, supporting, and "hyping up" the crowd at football games, basketball games, and even at wrestling meets. Along with this, cheerleaders usually perform at pep rallies and bring school spirit to other students. In May 2009, the National Federation of State High School Associations released the results of their first true high school participation study. They estimated that the number of high school cheerleaders from public high schools is around 394,700.
There are different cheerleading organizations that put on competitions; some of the major ones include state and regional competitions. Many high schools will often host cheerleading competitions, bringing in IHSA judges. The regional competitions are qualifiers for national competitions, such as the one held by the UCA (Universal Cheerleaders Association) in Orlando, Florida, every year. Many teams have a professional choreographer who choreographs their routine in order to ensure they are not breaking rules or regulations and to give the squad creative elements.
Most American universities have a cheerleading squad to cheer for football, basketball, volleyball, wrestling, and soccer. Most college squads tend to be larger coed teams, although in recent years, all-girl squads and smaller college squads have increased rapidly. Cheerleading is not recognized by the NCAA, NAIA, or NJCAA as athletics; therefore, there are few to no scholarships offered to athletes wanting to pursue cheerleading at the collegiate level. However, some community colleges and universities offer scholarships directly from the program or sponsorship funds. Some colleges offer scholarships for an athlete's talents, academic excellence, and/or involvement in community events.
College squads perform more difficult stunts which include multi-level pyramids, as well as flipping and twisting basket tosses.
Not only do college cheerleaders cheer on the other sports at their university, but many teams also compete with other schools at either UCA College Nationals or NCA College Nationals. This requires the teams to choreograph a 2-minute-and-30-second routine that includes elements of jumps, tumbling, stunting, basket tosses, pyramids, and a crowd involvement section. Winning one of these competitions is a very prestigious accomplishment, and is seen as another national title for most schools.
Organizations that sponsor youth cheer teams usually sponsor either youth league football or basketball teams as well. This allows for the two, under the same sponsor, to be intermingled. Both teams have the same mascot name and the cheerleaders will perform at their football or basketball games. Examples of such sponsors include Pop Warner, American Youth Football, and the YMCA. The purpose of these squads is primarily to support their associated football or basketball players, but some teams do compete at local or regional competitions. The Pop Warner Association even hosts a national championship each December for teams in their program who qualify.
"All-star" or club cheerleading differs from school or sideline cheerleading because all-star teams focus solely on performing a competition routine and not on leading cheers for other sports teams. All-star cheerleaders are members of a privately owned gym or club which they typically pay dues or tuition to, similar to a gymnastics gym.
During the early 1980s, cheerleading squads not associated with a school or sports league, whose main objective was competition, began to emerge. The first organization to call itself all-star was the Q94 Rockers from Richmond, Virginia, founded in 1982. All-star teams competing prior to 1987 were placed into the same divisions as teams that represented schools and sports leagues. In 1986, the National Cheerleaders Association (NCA) addressed this situation by creating a separate division for teams lacking a sponsoring school or athletic association, calling it the All-Star Division and debuting it at its 1987 competitions. As the popularity of this type of team grew, more and more of them were formed, attending competitions sponsored by many different organizations and companies, each using its own set of rules, regulations, and divisions. This situation became a concern to coaches and gym owners, as the inconsistencies caused coaches to keep their routines in a constant state of flux, detracting from time that could be better utilized for developing skills and providing personal attention to their athletes. More importantly, because the various companies were constantly vying for a competitive edge, safety standards had become more and more lax. In some cases, unqualified coaches and inexperienced squads were attempting dangerous stunts as a result of these expanded sets of rules.
The United States All Star Federation (USASF) was formed in 2003 by the competition companies to act as the national governing body for all star cheerleading and to create a standard set of rules and judging criteria to be followed by all competitions sanctioned by the Federation. Eager to grow the sport and create more opportunities for high-level teams, the USASF hosted the first Cheerleading Worlds on April 24, 2004. At the same time, cheerleading coaches from all over the country organized themselves for the same rule-making purpose, calling themselves the National All Star Cheerleading Coaches Congress (NACCC). In 2005, the NACCC was absorbed by the USASF to become its rule-making body. In late 2006, the USASF facilitated the creation of the International All-Star Federation (IASF), which now governs club cheerleading worldwide.
As of 2020, all-star cheerleading, as sanctioned by the USASF, involves a squad of 5–36 females and males. All-star cheerleaders are placed into divisions grouped by age, team size, gender of participants, and ability level. The age groups vary from under 4 years of age to 18 years and over. The squad prepares year-round for many different competition appearances, but actually performs for only up to 2½ minutes during its routine. The number of competitions a team participates in varies from team to team, but most teams tend to attend six to ten competitions a year. These include locals or regionals, which normally take place in school gymnasiums or local venues; nationals, hosted in large venues all around the U.S.; and the Cheerleading Worlds, which takes place at Walt Disney World in Orlando, Florida. During a competition routine, a squad performs carefully choreographed stunting, tumbling, jumping, and dancing to its own custom music. Teams create their routines to an eight-count system and apply that to the music so that the team members execute the elements with precise timing and synchronization.
All-star cheerleaders compete at competitions hosted by private event production companies, the foremost of these being Varsity Spirit. Varsity Spirit is the parent company of many subsidiaries, including the National Cheerleaders Association, the Universal Cheerleaders Association, AmeriCheer, Allstar Challenge, and JamFest, among others. Each company or subsidiary typically hosts its own local and national level competitions. This means that many gyms within the same area could be state and national champions for the same year without ever having competed against each other. Currently, no system is in place that awards only one state or national title.
Judges at a competition watch closely for illegal skills from the group or any individual member; an illegal skill is one that is not allowed in that division due to difficulty or safety restrictions. They also look for deductions, or things that go wrong, such as a dropped stunt or a tumbler who does not stick a landing. More generally, judges assess the difficulty and execution of jumps, stunts, and tumbling, along with synchronization, creativity, sharpness of motions, showmanship, and overall routine execution.
If a level 6 or 7 team places high enough at selected USASF/IASF-sanctioned national competitions, it can earn a place at the Cheerleading Worlds, competing against teams from all over the world and receiving prize money for placing. For elite-level cheerleaders, the Cheerleading Worlds is the highest level of competition to which they can aspire, and winning a world championship title is a significant honor.
Professional cheerleaders and dancers cheer for sports such as football, basketball, baseball, wrestling, or hockey. There are only a handful of professional cheerleading leagues around the world; some include the NBA Cheerleading League, the NFL Cheerleading League, the CFL Cheerleading League, the MLS Cheerleading League, the MLB Cheerleading League, and the NHL Ice Girls. Although professional cheerleading leagues exist in multiple countries, there are no Olympic teams.
In addition to cheering at games and competing, professional cheerleaders often do philanthropy and charity work, modeling, motivational speaking, television performances, and advertising.
Cheerleading carries the highest rate of catastrophic injuries to female athletes in high school and collegiate sports. Of the United States' 2.9 million female high school athletes, only 3% are cheerleaders, yet cheerleading accounts for nearly 65% of all catastrophic injuries in girls' high school athletics. In data covering the 1982–83 academic year through the 2018–19 academic year in the US, the rate of serious, direct traumatic injury per 100,000 participants was 1.68 for female cheerleaders at the high school level, the highest for all high school sports surveyed (table 9a). The college rate could not be determined, as the total number of collegiate cheerleaders was unknown, but the total number of traumatic, direct catastrophic injuries over this period was 33 (28 female, 5 male), higher than for all sports at this level aside from football (table 5a). Another study found that between 1982 and 2007, there were 103 fatal, disabling, or serious injuries recorded among female high school athletes, with the vast majority (67) occurring in cheerleading.
The main source of injuries is stunting, also known as pyramids. Stunts are performed at games and pep rallies as well as at competitions, and some competition routines are built solely around difficult and risky stunts. These stunts usually include a flyer (the person on top), one or two bases (the people on the bottom), and one or two spotters in the front and back on the bottom. The most common cheerleading-related injury is a concussion, and 96% of those concussions are stunt-related. Other injuries include sprained ankles, sprained wrists, back injuries, head injuries (sometimes concussions), broken arms, elbow injuries, knee injuries, broken noses, and broken collarbones. Injuries can, however, be as serious as whiplash, broken necks, broken vertebrae, and death.
The journal Pediatrics has reported that the number of cheerleaders suffering broken bones, concussions, and sprains increased by over 100 percent between 1990 and 2002, and that in 2001 there were 25,000 reported hospital visits for cheerleading injuries to the shoulder, ankle, head, and neck. In the US, cheerleading accounted for 65.1% of all major physical injuries to high school females and 66.7% of major physical-activity injuries to college students from 1982 to 2007, with 22,900 minors admitted to hospital with cheerleading-related injuries in 2002.
The risks of cheerleading were highlighted by the death of Lauren Chang, who died on April 14, 2008, after a teammate kicked her in the chest so hard during a competition that her lungs collapsed.
Cheerleading (for both girls and boys) was one of the sports studied in the Pediatric Injury Prevention, Education and Research Program of the Colorado School of Public Health in 2009/10–2012/13. Data on cheerleading injuries is included in the report for 2012–13.
International Cheer Union (ICU): Established on April 26, 2004, the ICU is recognized by SportAccord as the world governing body of cheerleading and the authority on all matters relating to it. With participation from its 105 member national federations, reaching 3.5 million athletes globally, the ICU continues to serve as the unified voice for those dedicated to cheerleading's positive development around the world.
Following a positive vote by the SportAccord General Assembly on May 31, 2013, in Saint Petersburg, the International Cheer Union (ICU) became SportAccord's 109th member and its 93rd international sports federation. In accordance with the SportAccord statutes, the ICU is recognized as the world governing body of cheerleading and the authority on all matters related to it.
As of the 2016–17 season, the ICU has introduced a junior age group (ages 12–16) to compete at the Cheerleading Worlds, because cheerleading now holds provisional status as an Olympic sport. For cheerleading to one day be in the Olympics, both a junior and a senior team must compete at the world championships. The first junior cheerleading team selected as the junior national team was from Eastside Middle School, located in Mount Washington, Kentucky; it represented the United States in the inaugural junior division at the world championships.
The ICU holds training seminars for judges and coaches, global events, and the World Cheerleading Championships. The ICU has also applied for full recognition by the International Olympic Committee (IOC) and is compliant with the code set by the World Anti-Doping Agency (WADA).
International Federation of Cheerleading (IFC): Established on July 5, 1998, the International Federation of Cheerleading (IFC) is a non-profit federation based in Tokyo, Japan, and is a world governing body of cheerleading, primarily in Asia. The IFC objectives are to promote cheerleading worldwide, to spread knowledge of cheerleading, and to develop friendly relations among the member associations and federations.
USA Cheer: The USA Federation for Sport Cheering (USA Cheer) was established in 2007 to serve as the national governing body for all types of cheerleading in the United States and is recognized by the ICU. "The USA Federation for Sport Cheering is a not-for-profit 501(c)(6) organization that was established in 2007 to serve as the National Governing Body for Sport Cheering in the United States. USA Cheer exists to serve the cheer community, including club cheering (all star) and traditional school based cheer programs, and the growing sport of STUNT. USA Cheer has three primary objectives: help grow and develop interest and participation in cheer throughout the United States; promote safety and safety education for cheer in the United States; and represent the United States of America in international cheer competitions." In March 2018, it absorbed the American Association of Cheerleading Coaches and Advisors (AACCA) and now provides safety guidelines and training for all levels of cheerleading. Additionally, it organizes the USA National Team.
Universal Cheerleading Association: UCA is an association owned by the company brand Varsity. "Universal Cheerleaders Association was founded in 1974 by Jeff Webb to provide the best educational training for cheerleaders with the goal of incorporating high-level skills with traditional crowd leading. It was Jeff's vision that would transform cheerleading into the dynamic, athletic combination of high energy entertainment and school leadership that is loved by so many." "Today, UCA is the largest cheerleading camp company in the world, offering the widest array of dates and locations of any camp company. We also celebrate cheerleaders' incredible hard work and athleticism through the glory of competition at over 50 regional events across the country and our Championships at the Walt Disney World Resort every year." "UCA has instilled leadership skills and personal confidence in more than 4.5 million athletes on and off the field while continuing to be the industry's leader for more than forty-five years." UCA has helped many cheerleaders get the training they need to succeed.
Asian Thailand Cheerleading Invitational (ATCI): Organised by the Cheerleading Association of Thailand (CAT) in accordance with the rules and regulations of the International Federation of Cheerleading (IFC). The ATCI has been held every year since 2009. At the ATCI, many teams from all over Thailand compete, joined by invited squads from neighbouring nations.
Cheerleading Asia International Open Championships (CAIOC): Hosted by the Foundation of Japan Cheerleading Association (FJCA) in accordance with the rules and regulations of the IFC. The CAIOC has been a yearly event since 2007. Every year, many teams from all over Asia converge in Tokyo to compete.
Cheerleading World Championships (CWC): Organised by the IFC. The IFC is a non-profit organisation founded in 1998 and based in Tokyo, Japan. The CWC has been held every two years since 2001, and to date, the competition has been held in Japan, the United Kingdom, Finland, Germany, and Hong Kong. The 6th CWC was held at the Hong Kong Coliseum on November 26–27, 2011.
ICU World Championships: The International Cheer Union currently encompasses 105 national federations from countries across the globe. Every year, the ICU hosts the World Cheerleading Championship. This competition uses a more collegiate-style performance and rulebook. Countries assemble and send only one team to represent them.
National Cheerleading Championships (NCC): The NCC is the annual IFC-sanctioned national cheerleading competition in Indonesia, organised by the Indonesian Cheerleading Community (ICC). Since NCC 2010, the event has been open to international competition, representing a significant step forward for the ICC. Teams from countries such as Japan, Thailand, the Philippines, and Singapore participated in the groundbreaking event.
Pan-American Cheerleading Championships (PCC): The PCC was held for the first time in 2009 in the city of Latacunga, Ecuador, and is the continental championship organised by the Pan-American Federation of Cheerleading (PFC). The PFC, operating under the umbrella of the IFC, is the non-profit continental body of cheerleading whose aim is to promote and develop cheerleading in the Americas. The PCC is a biennial event and was held for the second time in Lima, Peru, in November 2010.
USASF/IASF Worlds: United States cheerleading organizations formed and registered the not-for-profit United States All Star Federation (USASF) and the International All Star Federation (IASF) to support international club cheerleading and the World Cheerleading Club Championships. The first World Cheerleading Championships, or Cheerleading Worlds, were hosted by the USASF/IASF at the Walt Disney World Resort and taped for an ESPN global broadcast in 2004. This competition is only for all-star/club cheer. Only level 6 and 7 teams may attend, and they must receive a bid from a partner company.
Varsity: Varsity Spirit, a branch of Varsity Brands, is a parent company that, over the past 10 years, has absorbed or bought most other cheerleading event production companies. Its subsidiary competition companies include the National Cheerleaders Association, the Universal Cheerleaders Association, AmeriCheer, Allstar Challenge, and JamFest, among others.
In the United States, the designation of a "sport" is important because of Title IX. There is a large debate about whether cheerleading should be considered a sport for the purposes of Title IX (a portion of the United States Education Amendments of 1972 forbidding discrimination under any education program on the basis of sex). These arguments have varied from institution to institution and are reflected in how schools treat and organize cheerleading. Some institutions have been accused of not providing equal opportunities to their male students, or of not treating cheerleading as a sport, which reflects on the opportunities they provide to their athletes.
The Office for Civil Rights (OCR) issued memos and letters to schools stating that cheerleading, both sideline and competitive, may not be considered an "athletic program" for the purposes of Title IX. Supporters consider cheerleading as a whole a sport, citing its heavy use of athletic talents, while critics see it as a physical activity: a "sport" implies competition among all squads, yet not all squads compete, and competitions are judged subjectively, with scores, as in gymnastics, diving, and figure skating, assessed by human judgment rather than by an objective goal or measurement of time.
The Office for Civil Rights' primary concern was ensuring that institutions complied with Title IX by offering equal opportunities to all students regardless of gender. In its memos, its main point against cheerleading being a sport was that the activity was too underdeveloped and unorganized to have varsity-level athletic standing among students. This claim was not universal, and the Office for Civil Rights reviewed cheerleading on a case-by-case basis; as a result, the status of cheerleading under Title IX has varied from region to region, depending on the institution and how it organizes its teams. However, within its decisions, the Office for Civil Rights never clearly stated guidelines on what was and was not considered a sport under Title IX.
On January 27, 2009, in a lawsuit involving an accidental injury sustained during a cheerleading practice, the Wisconsin Supreme Court ruled that cheerleading is a full-contact sport in that state, not allowing any participants to be sued for accidental injury. In contrast, on July 21, 2010, in a lawsuit involving whether college cheerleading qualified as a sport for purposes of Title IX, a federal court, citing a current lack of program development and organization, ruled that it is not a sport at all.
The National Collegiate Athletic Association (NCAA) does not recognize cheerleading as a sport. In 2014, the American Medical Association adopted a policy that cheerleading, as the leading cause of catastrophic injuries among female athletes in both high school and college, should be considered a sport. While there are cheerleading teams at the majority of the NCAA's Division I schools, they are still not recognized as a sport, which leaves many teams without proper funding. Additionally, few if any college programs offer cheerleading scholarships, because universities cannot offer athletic scholarships to "spirit" team members.
In 2010, Quinnipiac University was sued for not providing equal opportunities for female athletes as required by Title IX after it disbanded its volleyball team and created a new competitive cheerleading team. The issue in Biediger v. Quinnipiac University was whether competitive cheerleading could be considered a sport under Title IX. Because the university had not provided additional opportunities for its female athletes, the court ruled that competitive cheerleading could not count as a varsity sport. The case established clear guidelines on what qualifies as a sport under Title IX; these guidelines are known as the three-pronged approach. The three-pronged approach is as follows:
The three-pronged approach was the first official guideline to state clearly what criteria determine whether an activity is considered a sport under Title IX. The approach was, and continues to be, used by the Office for Civil Rights. On this basis, the Office for Civil Rights still does not consider cheerleading, whether sideline or competitive, a sport under Title IX.
Cheerleading in Canada is rising in popularity among youth in co-curricular programs. Cheerleading has grown from the sidelines into a competitive activity throughout the world, and in Canada in particular. Cheerleading has a few streams in Canadian sports culture: it is available at the middle school, high school, and collegiate levels, and is best known in its all-star form. There are multiple regional, provincial, and national championship opportunities for all athletes participating in cheerleading. Canada does not have provincial teams, just a national program referred to as Team Canada, facilitated by Cheer Canada. Its first year as a national team was 2009, when it represented Canada at the International Cheer Union (ICU) World Cheerleading Championships.
Cheer Canada acts as the Canadian national governing body for cheer, as recognised by the International Cheer Union. A number of provincial sports organizations also exist in Canada under Cheer Canada, each governing cheer within its province: BC Sport Cheer, the Alberta Cheerleading Association, the Saskatchewan Cheerleading Association, Cheer Manitoba, the Ontario Cheerleading Federation, the Fédération de Cheerleading du Québec, Newfoundland and Labrador Cheerleading Athletics, Cheer New Brunswick, and Cheer Nova Scotia. Cheer Canada and the provincial organizations use the IASF divisions and rules for all star cheer and performance cheer (all star dance) and the ICU divisions and rules for scholastic cheer. Canadian Cheer (previously known as Cheer Evolution), the largest cheer and dance organization in Canada, currently complies with Cheer Canada's rules and guidelines for its 15 events. Varsity Spirit also hosts events within Canada using the Cheer Canada/IASF rules. There are currently over 400 clubs and schools recognised by Cheer Canada, with over 25,000 participants as of 2023.
There are two world championship competitions in which Canada participates. The first is the ICU World Championships, where the Canadian national teams compete against other countries. The second is the Cheerleading Worlds, where Canadian club teams, referred to as "all-star" teams, compete within the IASF divisions. National team members who compete at the ICU Worlds can also compete with their all-star club teams at the IASF World Championships, although crossovers between teams at an individual competition are not permitted. Teams compete against the other teams from their country on the first day of competition, and the top three teams from each country in each division continue to finals. At the end of finals, the highest-scoring team for each country earns the "Nations Cup". Canada has multiple teams across the country that compete in the IASF Cheerleading Worlds Championship. In total, Canada has had 98 international podium finishes at cheer events.
The International Cheer Union (ICU) comprises 119 member nations, which are eligible to field teams to compete against one another at the ICU World Championships in a number of divisions in both cheerleading and performance cheer, with special divisions for youth, junior, and adaptive-abilities athletes. Cheer Canada fields a national team of up to 40 athletes from around the country for each of a senior national all girl team and a senior national coed team; the athletes train at three camps across the season in Canada before 28 athletes per team are selected to train in Florida, with 24 per team going on to compete at ICU Worlds. At the 2023 ICU World Championships, Canada won a total of 4 medals (1 gold and 3 silver), with teams entered in the Youth All Girl, Youth Coed, Unified Median, Unified Advanced, Premier All Girl, Premier Coed, Performance Cheer Hip Hop Doubles, Performance Cheer Pom Doubles, and Performance Cheer Pom divisions.
In total, Team Canada holds podium placements at the ICU World Championships from the following years/divisions:
Cheerleading in Mexico is a popular sport commonly seen at Mexican college football and professional Mexican soccer sporting events. Cheerleading emerged within the National Autonomous University of Mexico (UNAM), the country's foremost institution of higher education, during the 1930s, almost immediately after the university was granted its autonomy. Since then, the activity has evolved into what it is now: developed first only at UNAM, later at other secondary and higher education institutions in Mexico City, and currently across practically the entire country.
In Mexico, the sport is endorsed by the Mexican Federation of Cheerleaders and Cheerleading Groups (Federación Mexicana de Porristas y Grupos de Animación, FMPGA), a body that regulates competitions in Mexico, and by subdivisions such as the Olympic Confederation of Cheerleaders (COP Brands), the National Organization of Cheerleaders (Organización Nacional de Porristas, ONP), and the Mexican Organization of Trainers and Animation Groups (Organización Mexicana de Entrenadores y Grupos de Animación, OMEGA Mexico), these being the largest such bodies in the country.
In 2021, the third edition of the National Championship of State Teams was organized by the Mexican Federation of Cheerleaders and Cheerleading Groups; on this occasion, the event was held virtually and broadcast live on the Vimeo platform.
In Mexico there are more than 500 teams and almost 10,000 athletes who practice the sport, as well as a national team, which won first place and a gold medal at the cheerleading world championship organized by the ICU (International Cheer Union) on April 24, 2015. In 2016, Mexico became the country with the second-most medals in this sport; with 27 medals, it is considered the second world power in the sport, behind only the United States. In the 2019 Coed Premier World Cheerleading Championship, Mexico ranked 4th, just behind the United States, Canada, and Taiwan. In 2021, the Mexican team won 3rd place in the Junior Boom category at the 2021 World Cheerleading Championship hosted by the International Federation of Cheerleading (IFC).
The history and growth of cheerleading in the United Kingdom are covered in a separate article, which can be used to compare and contrast the activity in the U.S. and in the UK.
The history and growth of cheerleading in Australia are covered in a separate article, which can be used to compare and contrast the activity in the U.S. and in Australia.
Former cheerleaders and well-known cheerleading squads are listed in a separate article.
{
"paragraph_id": 0,
"text": "Cheerleading is an activity in which the participants (called cheerleaders) cheer for their team as a form of encouragement. It can range from chanting slogans to intense physical activity. It can be performed to motivate sports teams, to entertain the audience, or for competition. Cheerleading routines typically range anywhere from one to three minutes, and contain components of tumbling, dance, jumps, cheers, and stunting.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Modern cheerleading is very closely associated with American football and basketball. Sports such as association football (soccer), ice hockey, volleyball, baseball, and wrestling will sometimes sponsor cheerleading squads. The ICC Twenty20 Cricket World Cup in South Africa in 2007 was the first international cricket event to have cheerleaders. The Florida Marlins were the first Major League Baseball team to have a cheerleading team.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Cheerleading originated as an all-male activity in the United States, and remains predominantly in America, with an estimated 3.85 million participants as of 2017. The global presentation of cheerleading was led by the 1997 broadcast of ESPN's International cheerleading competition, and the worldwide release of the 2000 film Bring It On. The International Cheer Union (ICU) now claims 116 member nations with an estimated 7.5 million participants worldwide. The sport has gained a lot of traction in Australia, Canada, Mexico, China, Colombia, Finland, France, Germany, Japan, the Netherlands, New Zealand, and the United Kingdom with popularity continuing to grow as sport leaders pursue Olympic status.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cheerleading carries the highest rate of catastrophic injuries to female athletes in sports, with most injuries associated with stunting, also known as pyramids.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Cheerleading began during the late 18th century with the rebellion of male students. After the American Revolutionary War, students experienced harsh treatment from teachers. In response to faculty's abuse, college students violently acted out. The undergraduates began to riot, burn down buildings located on their college campuses, and assault faculty members. As a more subtle way to gain independence, however, students invented and organized their own extracurricular activities outside their professors' control. This brought about American sports, beginning first with collegiate teams.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "In the 1860s, students from Great Britain began to cheer and chant in unison for their favorite athletes at sporting events. Soon, that gesture of support crossed overseas to America.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "On November 6, 1869, the United States witnessed its first intercollegiate football game. It took place between Princeton University and Rutgers University, and marked the day the original \"Sis Boom Rah!\" cheer was shouted out by student fans.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Organized cheerleading began as an all-male activity. As early as 1877, Princeton University had a \"Princeton Cheer\", documented in the February 22, 1877, March 12, 1880, and November 4, 1881, issues of The Daily Princetonian. This cheer was yelled from the stands by students attending games, as well as by the athletes themselves. The cheer, \"Hurrah! Hurrah! Hurrah! Tiger! S-s-s-t! Boom! A-h-h-h!\" remains in use with slight modifications today, where it is now referred to as the \"Locomotive\".",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Princeton class of 1882 graduate Thomas Peebles moved to Minnesota in 1884. He transplanted the idea of organized crowds cheering at football games to the University of Minnesota.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The term \"Cheer Leader\" had been used as early as 1897, with Princeton's football officials having named three students as Cheer Leaders: Thomas, Easton, and Guerin from Princeton's classes of 1897, 1898, and 1899, respectively, on October 26, 1897. These students would cheer for the team also at football practices, and special cheering sections were designated in the stands for the games themselves for both the home and visiting teams.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "It was not until 1898 that University of Minnesota student Johnny Campbell directed a crowd in cheering \"Rah, Rah, Rah! Ski-u-mah, Hoo-Rah! Hoo-Rah! Varsity! Varsity! Varsity, Minn-e-So-Tah!\", making Campbell the very first cheerleader.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "November 2, 1898, is the official birth date of organized cheerleading. Soon after, the University of Minnesota organized a \"yell leader\" squad of six male students, who still use Campbell's original cheer today.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "In 1903, the first cheerleading fraternity, Gamma Sigma, was founded.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In 1923, at the University of Minnesota, women were permitted to participate in cheerleading. However, it took time for other schools to follow. In the late 1920s, many school manuals and newspapers that were published still referred to cheerleaders as \"chap\", \"fellow\", and \"man\".",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Women cheerleaders were overlooked until the 1940s when collegiate men were drafted for World War II, creating the opportunity for more women to make their way onto sporting event sidelines. As noted by Kieran Scott in Ultimate Cheerleading: \"Girls really took over for the first time.\"",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In 1949, Lawrence Herkimer, a former cheerleader at Southern Methodist University and inventor of the herkie jump, founded his first cheerleading camp in Huntsville, Texas. 52 girls were in attendance. The clinic was so popular that Herkimer was asked to hold a second, where 350 young women were in attendance. Herkimer also patented the pom-pom.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In 1951, Herkimer created the National Cheerleading Association to help grow the activity and provide cheerleading education to schools around the country.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "During the 1950s, female participation in cheerleading continued to grow. An overview written on behalf of cheerleading in 1955 explained that in larger schools, \"occasionally boys as well as girls are included\", and in smaller schools, \"boys can usually find their place in the athletic program, and cheerleading is likely to remain solely a feminine occupation\". Cheerleading could be found at almost every school level across the country; even pee wee and youth leagues began to appear.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In the 1950s, professional cheerleading also began. The first recorded cheer squad in National Football League (NFL) history was for the Baltimore Colts. Professional cheerleaders put a new perspective on American cheerleading. Women were exclusively chosen for dancing ability as well as to conform to the male gaze, as heterosexual men were the targeted marketing group.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "By the 1960s, college cheerleaders employed by the NCA were hosting workshops across the nation, teaching fundamental cheer skills to tens of thousands of high-school-age girls. Herkimer also contributed many notable firsts to cheerleading: the founding of a cheerleading uniform supply company, inventing the herkie jump (where one leg is bent towards the ground as if kneeling and the other is out to the side as high as it will stretch in toe-touch position), and creating the \"Spirit Stick\".",
"title": "History"
},
{
"paragraph_id": 20,
"text": "In 1965, Fred Gastoff invented the vinyl pom-pom, which was introduced into competitions by the International Cheerleading Foundation (ICF, now the World Cheerleading Association, or WCA). Organized cheerleading competitions began to pop up with the first ranking of the \"Top Ten College Cheerleading Squads\" and \"Cheerleader All America\" awards given out by the ICF in 1967.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "The Dallas Cowboys Cheerleaders soon gained the spotlight with their revealing outfits and sophisticated dance moves, debuting in the 1972–1973 season, but were first widely seen in Super Bowl X (1976). These pro squads of the 1970s established cheerleaders as \"American icons of wholesome sex appeal.\"",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 1975, Randy Neil estimated that over 500,000 students actively participated in American cheerleading from elementary school to the collegiate level. Neil also approximated that ninety-five percent of cheerleaders within America were female.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In 1978, America was introduced to competitive cheerleading by the first broadcast of Collegiate Cheerleading Championships on CBS.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "The 1980s saw the beginning of modern cheerleading, adding difficult stunt sequences and gymnastics into routines. All-star teams, or those not affiliated with a school, popped up, and eventually led to the creation of the U.S. All Star Federation (USASF). ESPN first broadcast the National High School Cheerleading Competition nationwide in 1983.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "By 1981, a total of seventeen Nation Football League teams had their own cheerleaders. The only teams without NFL cheerleaders at this time were New Orleans, New York, Detroit, Cleveland, Denver, Minnesota, Pittsburgh, San Francisco, and San Diego. Professional cheerleading eventually spread to soccer and basketball teams as well.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Cheerleading organizations such as the American Association of Cheerleading Coaches and Advisors (AACCA), founded in 1987, started applying universal safety standards to decrease the number of injuries and prevent dangerous stunts, pyramids, and tumbling passes from being included in the cheerleading routines. In 2003, the National Council for Spirit Safety and Education (NCSSE) was formed to offer safety training for youth, school, all-star, and college coaches. The NCAA now requires college cheer coaches to successfully complete a nationally recognized safety-training program.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Even with its athletic and competitive development, cheerleading at the school level has retained its ties to its spirit leading traditions. Cheerleaders are quite often seen as ambassadors for their schools, and leaders among the student body. At the college level, cheerleaders are often invited to help at university fundraisers and events.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "Debuting in 2003, the \"Marlin Mermaids\" gained national exposure, and have influenced other MLB teams to develop their own cheer/dance squads.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "As of 2005, overall statistics show around 97% of all modern cheerleading participants are female, although at the collegiate level, cheerleading is co-ed with about 50% of participants being male. Modern male cheerleaders' stunts focus less on flexibility and more on tumbling, flips, pikes, and handstands. These depend on strong legs and strong core strength.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In 2019, Napoleon Jinnies and Quinton Peron became the first male cheerleaders in the history of the NFL to perform at the Super Bowl.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "Kristi Yamaoka, a cheerleader for Southern Illinois University, suffered a fractured vertebra when she hit her head after falling from a human pyramid. She also suffered from a concussion, and a bruised lung. The fall occurred when Yamaoka lost her balance during a basketball game between Southern Illinois University and Bradley University at the Savvis Center in St. Louis on March 5, 2006. The fall gained \"national attention\", because Yamaoka continued to perform from a stretcher as she was moved away from the game.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The accident caused the Missouri Valley Conference to ban its member schools from allowing cheerleaders to be \"launched or tossed and from taking part in formations higher than two levels\" for one week during a women's basketball conference tournament, and also resulted in a recommendation by the NCAA that conferences and tournaments do not allow pyramids two and one half levels high or higher, and a stunt known as basket tosses, during the rest of the men's and women's basketball season. On July 11, 2006, the bans were made permanent by the AACCA rules committee:",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The committee unanimously voted for sweeping revisions to cheerleading safety rules, the most major of which restricts specific upper-level skills during basketball games. Basket tosses, 2+1⁄2 high pyramids, one-arm stunts, stunts that involve twisting or flipping, and twisting tumbling skills may be performed only during halftime and post-game on a matted surface and are prohibited during game play or time-outs.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "Most American elementary schools, middle schools, high schools, and colleges have organized cheerleading squads. Some colleges even offer cheerleading scholarships for students. A school cheerleading team may compete locally, regionally, or nationally, but their main purpose is typically to cheer for sporting events and encourage audience participation. Cheerleading is quickly becoming a year-round activity, starting with tryouts during the spring semester of the preceding school year. Teams may attend organized summer cheerleading camps and practices to improve skills and create routines for competition.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 35,
"text": "In addition to supporting their schools' football or other sports teams, student cheerleaders may compete with recreational-style routine at competitions year-round.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 36,
"text": "In far more recent years, it has become more common for elementary schools to have an organized cheerleading team. This is a great way to get younger children introduced to the sport and used to being crowd leaders. Also, with young children learning so much so quickly, tumbling can come very easy to a child in elementary school.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 37,
"text": "Middle school cheerleading evolved shortly after high school squads were created and is set at the district level. In middle school, cheerleading squads serve the same purpose, but often follow a modified set of rules from high school squads with possible additional rules. Squads can cheer for basketball teams, football teams, and other sports teams in their school. Squads may also perform at pep rallies and compete against other local schools from the area. Cheerleading in middle school sometimes can be a two-season activity: fall and winter. However, many middle school cheer squads will go year-round like high school squads. Middle school cheerleaders use the same cheerleading movements as their older counterparts, yet may perform less extreme stunts and tumbling elements, depending on the rules in their area..",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 38,
"text": "In high school, there are usually two squads per school: varsity and a junior varsity. High school cheerleading contains aspects of school spirit as well as competition. These squads have become part of a year-round cycle. Starting with tryouts in the spring, year-round practice, cheering on teams in the fall and winter, and participating in cheerleading competitions. Most squads practice at least three days a week for about two hours each practice during the summer. Many teams also attend separate tumbling sessions outside of practice. During the school year, cheerleading is usually practiced five- to six-days-a-week. During competition season, it often becomes seven days with practice twice a day sometimes. The school spirit aspect of cheerleading involves cheering, supporting, and \"hyping up\" the crowd at football games, basketball games, and even at wrestling meets. Along with this, cheerleaders usually perform at pep rallies, and bring school spirit to other students. In May 2009, the National Federation of State High School Associations released the results of their first true high school participation study. They estimated that the number of high school cheerleaders from public high schools is around 394,700.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 39,
"text": "There are different cheerleading organizations that put on competitions; some of the major ones include state and regional competitions. Many high schools will often host cheerleading competitions, bringing in IHSA judges. The regional competitions are qualifiers for national competitions, such as the UCA (Universal Cheerleaders Association) Archived 2009-09-20 at the Wayback Machine in Orlando, Florida, every year. Many teams have a professional choreographer that choreographs their routine in order to ensure they are not breaking rules or regulations and to give the squad creative elements.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 40,
"text": "Most American universities have a cheerleading squad to cheer for football, basketball, volleyball, wrestling, and soccer. Most college squads tend to be larger coed teams, although in recent years; all-girl squads and smaller college squads have increased rapidly. Cheerleading is not recognized by NCAA, NAIA, and NJCAA as athletics; therefore, there are few to no scholarships offered to athletes wanting to pursue cheerleading at the collegiate level. However, some community colleges and universities offer scholarships directly from the program or sponsorship funds. Some colleges offer scholarships for an athlete's talents, academic excellence, and/or involvement in community events.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 41,
"text": "College squads perform more difficult stunts which include multi-level pyramids, as well as flipping and twisting basket tosses.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 42,
"text": "Not only do college cheerleaders cheer on the other sports at their university, many teams at universities compete with other schools at either UCA College Nationals or NCA College Nationals. This requires the teams to choreograph a 2-minute and 30 second routine that includes elements of jumps, tumbling, stunting, basket tosses, pyramids, and a crowd involvement section. Winning one of these competitions is a very prestigious accomplishment, and is seen as another national title for most schools.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 43,
"text": "Organizations that sponsor youth cheer teams usually sponsor either youth league football or basketball teams as well. This allows for the two, under the same sponsor, to be intermingled. Both teams have the same mascot name and the cheerleaders will perform at their football or basketball games. Examples of such sponsors include Pop Warner, American Youth Football, and the YMCA. The purpose of these squads is primarily to support their associated football or basketball players, but some teams do compete at local or regional competitions. The Pop Warner Association even hosts a national championship each December for teams in their program who qualify.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 44,
"text": "\"All-star\" or club cheerleading differs from school or sideline cheerleading because all-star teams focus solely on performing a competition routine and not on leading cheers for other sports teams. All-star cheerleaders are members of a privately owned gym or club which they typically pay dues or tuition to, similar to a gymnastics gym.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 45,
"text": "During the early 1980s, cheerleading squads not associated with a school or sports league, whose main objective was competition, began to emerge. The first organization to call themselves all-stars were the Q94 Rockers from Richmond, Virginia, founded in 1982. All-star teams competing prior to 1987 were placed into the same divisions as teams that represented schools and sports leagues. In 1986, the National Cheerleaders Association (NCA) addressed this situation by creating a separate division for teams lacking a sponsoring school or athletic association, calling it the All-Star Division and debuting it at their 1987 competitions. As the popularity of this type of team grew, more and more of them were formed, attending competitions sponsored by many different organizations and companies, each using its own set of rules, regulations, and divisions. This situation became a concern to coaches and gym owners, as the inconsistencies caused coaches to keep their routines in a constant state of flux, detracting from time that could be better utilized for developing skills and providing personal attention to their athletes. More importantly, because the various companies were constantly vying for a competitive edge, safety standards had become more and more lax. In some cases, unqualified coaches and inexperienced squads were attempting dangerous stunts as a result of these expanded sets of rules.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 46,
"text": "The United States All Star Federation (USASF) was formed in 2003 by the competition companies to act as the national governing body for all star cheerleading and to create a standard set of rules and judging criteria to be followed by all competitions sanctioned by the Federation. Eager to grow the sport and create more opportunities for high-level teams, The USASF hosted the first Cheerleading Worlds on April 24, 2004. At the same time, cheerleading coaches from all over the country organized themselves for the same rule making purpose, calling themselves the National All Star Cheerleading Coaches Congress (NACCC). In 2005, the NACCC was absorbed by the USASF to become their rule making body. In late 2006, the USASF facilitated the creation of the International All-Star Federation (IASF), which now governs club cheerleading worldwide.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 47,
"text": "As of 2020, all-star cheerleading, as sanctioned by the USASF, involves a squad of 5–36 females and males. All-star cheerleaders are placed into divisions, which are grouped based upon age, size of the team, gender of participants, and ability level. The age groups vary from under 4 years of age to 18 years and over. The squad prepares year-round for many different competition appearances, but they actually perform only for up to 2+1⁄2 minutes during their team's routine. The numbers of competitions a team participates in varies from team to team, but generally, most teams tend to participate in six to ten competitions a year. These competitions include locals or regionals, which normally take place in school gymnasiums or local venues, nationals, hosted in large venues all around the U.S., and the Cheerleading Worlds, which takes place at Walt Disney World in Orlando, Florida. During a competition routine, a squad performs carefully choreographed stunting, tumbling, jumping, and dancing to their own custom music. Teams create their routines to an eight-count system and apply that to the music so that the team members execute the elements with precise timing and synchronization.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 48,
"text": "All-star cheerleaders compete at competitions hosted by private event production companies, the foremost of these being Varsity Spirit. Varsity Spirit is the parent company for many subsidiaries including The National Cheerleader's Association, The Universal Cheerleader's Association, AmeriCheer, Allstar Challenge, and JamFest, among others. Each separate company or subsidiary typically hosts their own local and national level competitions. This means that many gyms within the same area could be state and national champions for the same year and never have competed against each other. Currently, there is no system in place that awards only one state or national title.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 49,
"text": "Judges at a competition watch closely for illegal skills from the group or any individual member. Here, an illegal skill is something that is not allowed in that division due to difficulty or safety restrictions. They look out for deductions, or things that go wrong, such as a dropped stunt or a tumbler who does not stick a landing. More generally, judges look at the difficulty and execution of jumps, stunts and tumbling, synchronization, creativity, the sharpness of the motions, showmanship, and overall routine execution.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 50,
"text": "If a level 6 or 7 team places high enough at selected USASF/IASF sanctioned national competitions, they could earn a place at the Cheerleading Worlds and compete against teams from all over the world, as well as receive money for placing. For elite level cheerleaders, The Cheerleading Worlds is the highest level of competition to which they can aspire, and winning a world championship title is an incredible honor.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 51,
"text": "Professional cheerleaders and dancers cheer for sports such as football, basketball, baseball, wrestling, or hockey. There are only a small handful of professional cheerleading leagues around the world; some professional leagues include the NBA Cheerleading League, the NFL Cheerleading League, the CFL Cheerleading League, the MLS Cheerleading League, the MLB Cheerleading League, and the NHL Ice Girls. Although professional cheerleading leagues exist in multiple countries, there are no Olympic teams.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 52,
"text": "In addition to cheering at games and competing, professional cheerleaders often do a lot of philanthropy and charity work, modeling, motivational speaking, television performances, and advertising.",
"title": "Types of teams in the United States today"
},
{
"paragraph_id": 53,
"text": "Cheerleading carries the highest rate of catastrophic injuries to female athletes in high school and collegiate sports. Of the United States' 2.9 million female high school athletes, only 3% are cheerleaders, yet cheerleading accounts for nearly 65% of all catastrophic injuries in girls' high school athletics. In data covering the 1982-83 academic year through the 2018-19 academic year in the US, the rate of serious, direct traumatic injury per 100,000 participants was 1.68 for female cheerleaders at the high school level, the highest for all high school sports surveyed. (table 9a) The college rate could not be determined, as the total number of collegiate cheerleaders was unknown, but the total number of traumatic, direct catastrophic injuries over this period was 33 (28 female, 5 male), higher than all sports at this level aside from football. (table 5a) Another study found that between 1982 and 2007, there were 103 fatal, disabling, or serious injuries recorded among female high school athletes, with the vast majority (67) occurring in cheerleading.",
"title": "Injuries and accidents"
},
{
"paragraph_id": 54,
"text": "The main source of injuries comes from stunting, also known as pyramids. These stunts are performed at games and pep rallies, as well as competitions. Sometimes competition routines are focused solely around the use of difficult and risky stunts. These stunts usually include a flyer (the person on top), along with one or two bases (the people on the bottom), and one or two spotters in the front and back on the bottom. The most common cheerleading related injury is a concussion. 96% of those concussions are stunt related. Other injuries include: sprained ankles, sprained wrists, back injuries, head injuries (sometimes concussions), broken arms, elbow injuries, knee injuries, broken noses, and broken collarbones. Sometimes, however, injuries can be as serious as whiplash, broken necks, broken vertebrae, and death.",
"title": "Injuries and accidents"
},
{
"paragraph_id": 55,
"text": "The journal Pediatrics has reportedly said that the number of cheerleaders suffering from broken bones, concussions, and sprains has increased by over 100 percent between the years of 1990 and 2002, and that in 2001, there were 25,000 hospital visits reported for cheerleading injuries dealing with the shoulder, ankle, head, and neck. Meanwhile, in the US, cheerleading accounted for 65.1% of all major physical injuries to high school females, and to 66.7% of major injuries to college students due to physical activity from 1982 to 2007, with 22,900 minors being admitted to hospital with cheerleading-related injuries in 2002.",
"title": "Injuries and accidents"
},
{
"paragraph_id": 56,
"text": "The risks of cheerleading were highlighted at the death of Lauren Chang. Chang died on April 14, 2008, after competing in a competition where her teammate had kicked her so hard in the chest that her lungs collapsed.",
"title": "Injuries and accidents"
},
{
"paragraph_id": 57,
"text": "Cheerleading (for both girls and boys) was one of the sports studied in the Pediatric Injury Prevention, Education and Research Program of the Colorado School of Public Health in 2009/10–2012/13. Data on cheerleading injuries is included in the report for 2012–13.",
"title": "Injuries and accidents"
},
{
"paragraph_id": 58,
"text": "International Cheer Union (ICU): Established on April 26, 2004, the ICU is recognized by the SportAccord as the world governing body of cheerleading and the authority on all matters with relation to it. Including participation from its 105-member national federations reaching 3.5 million athletes globally, the ICU continues to serve as the unified voice for those dedicated to cheerleading's positive development around the world.",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 59,
"text": "Following a positive vote by the SportAccord General Assembly on May 31, 2013, in Saint Petersburg, the International Cheer Union (ICU) became SportAccord's 109th member, and SportAccord's 93rd international sports federation to join the international sports family. In accordance with the SportAccord statutes, the ICU is recognized as the world governing body of cheerleading and the authority on all matters related to it.",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 60,
"text": "As of the 2016–17 season, the ICU has introduced a Junior aged team (12-16) to compete at the Cheerleading Worlds, because cheerleading is now in provisional status to become a sport in the Olympics. For cheerleading to one day be in the Olympics, there must be a junior and senior team that competes at the world championships. The first junior cheerleading team that was selected to become the junior national team was Eastside Middle School, located in Mount Washington Kentucky and will represent the United States in the inaugural junior division at the world championships.",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 61,
"text": "The ICU holds training seminars for judges and coaches, global events and the World Cheerleading Championships. The ICU is also fully applied to the International Olympic Committee (IOC) and is compliant under the code set by the World Anti-Doping Agency (WADA).",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 62,
"text": "International Federation of Cheerleading (IFC): Established on July 5, 1998, the International Federation of Cheerleading (IFC) is a non-profit federation based in Tokyo, Japan, and is a world governing body of cheerleading, primarily in Asia. The IFC objectives are to promote cheerleading worldwide, to spread knowledge of cheerleading, and to develop friendly relations among the member associations and federations.",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 63,
"text": "USA Cheer: The USA Federation for Sport Cheering (USA Cheer) was established in 2007 to serve as the national governing body for all types of cheerleading in the United States and is recognized by the ICU. \"The USA Federation for Sport Cheering is a not-for profit 501(c)(6) organization that was established in 2007 to serve as the National Governing Body for Sport Cheering in the United States. USA Cheer exists to serve the cheer community, including club cheering (all star) and traditional school based cheer programs, and the growing sport of STUNT. USA Cheer has three primary objectives: help grow and develop interest and participation in cheer throughout the United States; promote safety and safety education for cheer in the United States; and represent the United States of America in international cheer competitions.\" In March 2018, they absorbed the American Association of Cheerleading Coaches and Advisors (AACCA) and now provide safety guidelines and training for all levels of cheerleading. Additionally, they organize the USA National Team.",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 64,
"text": "Universal Cheerleading Association: UCA is an association owned by the company brand Varsity. \"Universal Cheerleaders Association was founded in 1974 by Jeff Webb to provide the best educational training for cheerleaders with the goal of incorporating high-level skills with traditional crowd leading. It was Jeff's vision that would transform cheerleading into the dynamic, athletic combination of high energy entertainment and school leadership that is loved by so many.\" \"Today, UCA is the largest cheerleading camp company in the world, offering the widest array of dates and locations of any camp company. We also celebrate cheerleader's incredible hard work and athleticism through the glory of competition at over 50 regional events across the country and our Championships at the Walt Disney World Resort every year.\" \"UCA has instilled leadership skills and personal confidence in more than 4.5 million athletes on and off the field while continuing to be the industry's leader for more than forty-five years. UCA has helped many cheerleaders get the training they need to succeed.",
"title": "Associations, federations, and organizations"
},
{
"paragraph_id": 65,
"text": "Asian Thailand Cheerleading Invitational (ATCI): Organised by the Cheerleading Association of Thailand (CAT) in accordance with the rules and regulations of the International Federation of Cheerleading (IFC). The ATCI is held every year since 2009. At the ATCI, many teams from all over Thailand compete, joining them are many invited neighbouring nations who also send cheer squads.",
"title": "Competitions and companies"
},
{
"paragraph_id": 66,
"text": "Cheerleading Asia International Open Championships (CAIOC): Hosted by the Foundation of Japan Cheerleading Association (FJCA) in accordance with the rules and regulations of the IFC. The CAIOC has been a yearly event since 2007. Every year, many teams from all over Asia converge in Tokyo to compete.",
"title": "Competitions and companies"
},
{
"paragraph_id": 67,
"text": "Cheerleading World Championships (CWC): Organised by the IFC. The IFC is a non-profit organisation founded in 1998 and based in Tokyo, Japan. The CWC has been held every two years since 2001, and to date, the competition has been held in Japan, the United Kingdom, Finland, Germany, and Hong Kong. The 6th CWC was held at the Hong Kong Coliseum on November 26–27, 2011.",
"title": "Competitions and companies"
},
{
"paragraph_id": 68,
"text": "ICU World Championships: The International Cheer Union currently encompasses 105 National Federations from countries across the globe. Every year, the ICU host the World Cheerleading Championship. This competition uses a more collegiate style performance and rulebook. Countries assemble and send only one team to represent them.",
"title": "Competitions and companies"
},
{
"paragraph_id": 69,
"text": "National Cheerleading Championships (NCC): The NCC is the annual IFC-sanctioned national cheerleading competition in Indonesia organised by the Indonesian Cheerleading Community (ICC). Since NCC 2010, the event is now open to international competition, representing a significant step forward for the ICC. Teams from many countries such as Japan, Thailand, the Philippines, and Singapore participated in the ground breaking event.",
"title": "Competitions and companies"
},
{
"paragraph_id": 70,
"text": "Pan-American Cheerleading Championships (PCC): The PCC was held for the first time in 2009 in the city of Latacunga, Ecuador and is the continental championship organised by the Pan-American Federation of Cheerleading (PFC). The PFC, operating under the umbrella of the IFC, is the non-profit continental body of cheerleading whose aim it is to promote and develop cheerleading in the Americas. The PCC is a biennial event, and was held for the second time in Lima, Peru, in November 2010.",
"title": "Competitions and companies"
},
{
"paragraph_id": 71,
"text": "USASF/IASF Worlds: Many United States cheerleading organizations form and register the not-for-profit entity the United States All Star Federation (USASF) and also the International All Star Federation (IASF) to support international club cheerleading and the World Cheerleading Club Championships. The first World Cheerleading Championships, or Cheerleading Worlds, were hosted by the USASF/IASF at the Walt Disney World Resort and taped for an ESPN global broadcast in 2004. This competition is only for All-Star/Club cheer. Only level 6 and 7 teams may attend and must receive a bid from a partner company.",
"title": "Competitions and companies"
},
{
"paragraph_id": 72,
"text": "Varsity: Varsity Spirit, a branch of Varsity Brands is a parent company which, over the past 10 years, has absorbed or bought most other cheerleading event production companies. The following is a list of subsidiary competition companies owned by Varsity Spirit:",
"title": "Competitions and companies"
},
{
"paragraph_id": 73,
"text": "In the United States, the designation of a \"sport\" is important because of Title IX. There is a large debate on whether or not cheerleading should be considered a sport for Title IX (a portion of the United States Education Amendments of 1972 forbidding discrimination under any education program on the basis of sex) purposes. These arguments have varied from institution to institution and are reflected in how they treat and organize cheerleading within their schools. Some institutions have been accused of not providing equal opportunities to their male students or for not treating cheerleading as a sport, which reflects on the opportunities they provide to their athletes.",
"title": "Title IX sports status"
},
{
"paragraph_id": 74,
"text": "The Office for Civil Rights (OCR) issued memos and letters to schools that cheerleading, both sideline and competitive, may not be considered \"athletic programs\" for the purposes of Title IX. Supporters consider cheerleading, as a whole, a sport, citing the heavy use of athletic talents while critics see it as a physical activity because a \"sport\" implies a competition among all squads and not all squads compete, along with subjectivity of competitions where—as with gymnastics, diving, and figure skating—scores are assessed based on human judgment and not an objective goal or measurement of time.",
"title": "Title IX sports status"
},
{
"paragraph_id": 75,
"text": "The Office for Civil Rights' primary concern was ensuring that institutions complied with Title IX, which means offering equal opportunities to all students despite their gender. In their memos, their main point against cheerleading being a sport was that the activity is underdeveloped and unorganized to have varsity-level athletic standing among students. This claim was not universal and the Office for Civil Rights would review cheerleading on a case-by-case basis. Due to this the status of cheerleading under Title IX has varied from region to region based on the institution and how they organize their teams. However, within their decisions, the Office for Civil Rights never clearly stated any guidelines on what was and was not considered a sport under Title IX.",
"title": "Title IX sports status"
},
{
"paragraph_id": 76,
"text": "On January 27, 2009, in a lawsuit involving an accidental injury sustained during a cheerleading practice, the Wisconsin Supreme Court ruled that cheerleading is a full-contact sport in that state, not allowing any participants to be sued for accidental injury. In contrast, on July 21, 2010, in a lawsuit involving whether college cheerleading qualified as a sport for purposes of Title IX, a federal court, citing a current lack of program development and organization, ruled that it is not a sport at all.",
"title": "Title IX sports status"
},
{
"paragraph_id": 77,
"text": "The National Collegiate Athletic Association (NCAA) does not recognize cheerleading as a sport. In 2014, the American Medical Association adopted a policy that, as the leading cause of catastrophic injuries of female athletes both in high school and college, cheerleading should be considered a sport. While there are cheerleading teams at the majority of the NCAA's Division I schools, they are still not recognized as a sport. This results in many teams not being properly funded. Additionally, there are little to no college programs offering scholarships because their universities cannot offer athletic scholarships to \"spirit\" team members.",
"title": "Title IX sports status"
},
{
"paragraph_id": 78,
"text": "In 2010, Quinnipiac University was sued for not providing equal opportunities for female athletes as required by Title IX. The university disbanded its volleyball team and created a new competitive cheerleading sports team. The issue with Biediger v. Quinnipiac University is centered around whether competitive cheerleading could be considered a sport for Title IX. The university had not provided additional opportunities for their female athletes which led to the court ruling in favor that cheerleading could not count as a varsity sport. This case established clear guidelines on what qualifies as a sport under Title IX, these guidelines are known as the three-pronged approach. The three-pronged approach is as follows:",
"title": "Title IX sports status"
},
{
"paragraph_id": 79,
"text": "The three-pronged approach was the first official guideline that clearly stated what criteria were necessary when deciding what activity was considered a sport or not under Title IX. This approach was used and is still continued to be used by the Office for Civil Rights. Based on this approach the Office for Civil Rights still considers cheerleading, including both sideline and competitive, not a sport under Title IX.",
"title": "Title IX sports status"
},
{
"paragraph_id": 80,
"text": "Cheerleading in Canada is rising in popularity among the youth in co-curricular programs. Cheerleading has grown from the sidelines to a competitive activity throughout the world and in particular Canada. Cheerleading has a few streams in Canadian sports culture. It is available at the middle-school, high-school, collegiate, and best known for all-star. There are multiple regional, provincial, and national championship opportunities for all athletes participating in cheerleading. Canada does not have provincial teams, just a national program referred to as Team Canada, facilitated by Cheer Canada. Their first year as a national team was in 2009 when they represented Canada at the International Cheer Union World Cheerleading Championships International Cheer Union (ICU).",
"title": "Cheerleading in Canada"
},
{
"paragraph_id": 81,
"text": "Cheer Canada acts as the Canadian national governing body for cheer, as recognised by the International Cheer Union. There are a number of provincial sports organizations that also exist in Canada under Cheer Canada, each governing cheer within their province which BC Sport Cheer, Alberta Cheerleading Association, Saskatchewan Cheerleading Association, Cheer Manitoba, Ontario Cheerleading Federation, Federation de Cheerleading du Quebec, Newfoundland and Labrador Cheerleading Athletics, Cheer New Brunswick and Cheer Nova Scotia. Cheer Canada and the provincial organizations utilise the IASF divisions and rules for all star cheer and performance cheer (all star dance) and the ICU divisions and rules for scholastic cheer. Canadian Cheer (previously known as Cheer Evolution) is the largest cheer and dance organization for Canada, and currently comply to Cheer Canada's rules and guidelines for their 15 events. Varsity Spirit also hosts events within Canada utilising the Cheer Canada/IASF rules. There are currently over 400 clubs and schools recognised by Cheer Canada, with over 25,000 participants as of 2023.",
"title": "Cheerleading in Canada"
},
{
"paragraph_id": 82,
"text": "There are two world championship competitions that Canada participates in. The first is the ICU World Championships where the Canadian National Teams compete against other countries. The second is The Cheerleading Worlds where Canadian club teams, referred to as \"all-star\" teams, compete within the IASF divisions. National team members who compete at the ICU Worlds can also compete with their \"all-star club\" teams at the IASF World Championships. Although athletes can compete in both International Cheer Union (ICU) and IASF championships, crossovers between teams at each individual competition are not permitted. Teams compete against the other teams from their countries on the first day of competition and the top three teams from each country in each division continue to finals. At the end of finals, the top team scoring the highest for their country earns the \"Nations Cup\". Canada has multiple teams across their country that compete in the IASF Cheerleading Worlds Championship. In total, Canada has had 98 International podium finishes at cheer events.",
"title": "Cheerleading in Canada"
},
{
"paragraph_id": 83,
"text": "The International Cheer Union (ICU) is built of 119 member nations, who are eligible to field teams to compete against one another at the ICU World Championships in a number of divisions in both cheerleading and performance cheer, with special divisions for youth, junior and adaptive abilities athletes. Cheer Canada fields a national team, with up to 40 athletes from around the country for both a senior national all girl and senior national coed team training at three training camps across the season in Canada before 28 athletes per team are selected to train in Florida, with 24 athletes going on to compete on the competition floor at ICU Worlds. In the 2023 ICU World Championships, Canada won a total of 4 medals (1 gold and 3 silver) with teams entered in the Youth All Girl, Youth Coed, Unified Median, Unified Advanced, Premier All Girl, Premier Coed, Performance Cheer Hip Hop doubles, Performance Cheer Pom Doubles and Performance Cheer Pom divisions.",
"title": "Cheerleading in Canada"
},
{
"paragraph_id": 84,
"text": "In total, Team Canada holds podium placements at the ICU World Championships from the following years/divisions:",
"title": "Cheerleading in Canada"
},
{
"paragraph_id": 85,
"text": "Cheerleading in Mexico is a popular sport commonly seen in Mexican College Football and Professional Mexican Soccer sporting events. Cheerleading emerged within the National Autonomous University of Mexico (UNAM), the highest House of Studies in the country, during the 1930s, almost immediately after it was granted its autonomy. Since then, this phenomenon has been evolving to become what it is now. Firstly, it was developed only in the UNAM, later in other secondary and higher education institutions in Mexico City, and currently in practically the entire country.",
"title": "Cheerleading in Mexico"
},
{
"paragraph_id": 86,
"text": "In Mexico, this sport is endorsed by the Mexican Federation of Cheerleaders and Cheerleading Groups (Federación Mexicana de Porristas y Grupos de Animación) (FMPGA), a body that regulates competitions in Mexico and subdivisions such as the Olympic Confederation of Cheerleaders (COP Brands), National Organization of Cheerleaders (Organización Nacional de Porristas) (ONP) and the Mexican Organization of Trainers and Animation Groups (Organización Mexicana de Entrenadores y Grupos de Animación) (OMEGA Mexico), these being the largest in the country.",
"title": "Cheerleading in Mexico"
},
{
"paragraph_id": 87,
"text": "In 2021, the third edition of the National Championship of State Teams was held and organized by The Mexican Federation of Cheerleaders and Cheerleading Groups, on this occasion, the event was held virtually, broadcasting live, through the Vimeo platform.",
"title": "Cheerleading in Mexico"
},
{
"paragraph_id": 88,
"text": "In Mexico there are more than 500 teams and almost 10,000 athletes who practice this sport, in addition to a representative national team of Mexico, which won first place in the cheerleading world championship organized by the ICU (International Cheer Union) on April 24, 2015, receiving a gold medal; In 2016, Mexico became the second country with the most medals in the world in this sport. With 27 medals, it is considered the second world power in this sport, only behind the United States. In the 2019 Coed Premier World Cheerleading Championship Mexico ranked 4th just behind the United States, Canada and Taiwan. In 2021, the Mexican team won 3rd place at the Junior Boom category in World Cheerleading Championship 2021 hosted by international cheerleading federation.",
"title": "Cheerleading in Mexico"
},
{
"paragraph_id": 89,
"text": "This section has a link to a separate Wikipedia page that talks about the history and growth of cheerleading in the United Kingdom. This can be used to compare and contrast the activity in the U.S and in Australia.",
"title": "Cheerleading in the United Kingdom"
},
{
"paragraph_id": 90,
"text": "This section has a link to a separate Wikipedia page that talks about the history and growth of cheerleading in Australia. This can be used to compare and contrast the activity in the U.S and in Australia.",
"title": "Cheerleading in Australia"
},
{
"paragraph_id": 91,
"text": "This section has a link to a separate Wikipedia page that lists former cheerleaders and well-known cheerleading squads.",
"title": "Notable former cheerleaders"
}
] | Cheerleading is an activity in which the participants cheer for their team as a form of encouragement. It can range from chanting slogans to intense physical activity. It can be performed to motivate sports teams, to entertain the audience, or for competition. Cheerleading routines typically range anywhere from one to three minutes, and contain components of tumbling, dance, jumps, cheers, and stunting. Modern cheerleading is very closely associated with American football and basketball. Sports such as association football (soccer), ice hockey, volleyball, baseball, and wrestling will sometimes sponsor cheerleading squads. The ICC Twenty20 Cricket World Cup in South Africa in 2007 was the first international cricket event to have cheerleaders. The Florida Marlins were the first Major League Baseball team to have a cheerleading team. Cheerleading originated as an all-male activity in the United States, and remains predominantly in America, with an estimated 3.85 million participants as of 2017. The global presentation of cheerleading was led by the 1997 broadcast of ESPN's International cheerleading competition, and the worldwide release of the 2000 film Bring It On. The International Cheer Union (ICU) now claims 116 member nations with an estimated 7.5 million participants worldwide. The sport has gained a lot of traction in Australia, Canada, Mexico, China, Colombia, Finland, France, Germany, Japan, the Netherlands, New Zealand, and the United Kingdom with popularity continuing to grow as sport leaders pursue Olympic status. Cheerleading carries the highest rate of catastrophic injuries to female athletes in sports, with most injuries associated with stunting, also known as pyramids. | 2001-10-10T03:05:32Z | 2023-11-19T20:59:40Z | [
"Template:Pp-semi-protected",
"Template:Webarchive",
"Template:Short description",
"Template:Cn",
"Template:More citations needed section",
"Template:Commons category",
"Template:Redirect",
"Template:Frac",
"Template:Citation needed",
"Template:Gold01",
"Template:Dead link",
"Template:More footnotes",
"Template:USS",
"Template:See also",
"Template:ISBN",
"Template:About",
"Template:As of",
"Template:Div col",
"Template:Div col end",
"Template:Cite web",
"Template:Deadlink",
"Template:Citation",
"Template:Main",
"Template:Cite news",
"Template:Curlie",
"Template:Supporter Culture",
"Template:Use mdy dates",
"Template:Multiple image",
"Template:Bronze03",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite press release",
"Template:Wiktionary",
"Template:Silver02",
"Template:Cite book",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Cheerleading |
6,751 | Cottingley Fairies | The Cottingley Fairies appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of The Strand Magazine. Doyle, as a spiritualist, was enthusiastic about the photographs, and interpreted them as clear and visible evidence of psychic phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked.
Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, and yet the photographs continued to hold the public imagination. In 1966 a reporter from the Daily Express newspaper traced Elsie, who had by then returned to the United Kingdom. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story.
In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. As of 2019 the photographs and the cameras used are in the collections of the National Science and Media Museum in Bradford, England.
In mid-1917 nine-year-old Frances Griffiths and her mother – both newly arrived in the UK from South Africa – were staying with Frances's aunt, Elsie Wright's mother, in the village of Cottingley in West Yorkshire; Elsie was then 16 years old. The two girls often played together beside the beck at the bottom of the garden, much to their mothers' annoyance, because they frequently came back with wet feet and clothes. Frances and Elsie said they only went to the beck to see the fairies, and to prove it, Elsie borrowed her father's camera, a Midg quarter-plate. The girls returned about 30 minutes later, "triumphant".
Elsie's father, Arthur, was a keen amateur photographer, and had set up his own darkroom. The picture on the photographic plate he developed showed Frances behind a bush in the foreground, on which four fairies appeared to be dancing. Knowing his daughter's artistic ability, and that she had spent some time working in a photographer's studio, he dismissed the figures as cardboard cutouts. Two months later the girls borrowed his camera again, and this time returned with a photograph of Elsie sitting on the lawn holding out her hand to a 1-foot-tall (30 cm) gnome. Exasperated by what he believed to be "nothing but a prank", and convinced that the girls must have tampered with his camera in some way, Arthur Wright refused to lend it to them again. His wife Polly, however, believed the photographs to be authentic.
I am learning French, Geometry, Cookery and Algebra at school now. Dad came home from France the other week after being there ten months, and we all think the war will be over in a few days ... I am sending two photos, both of me, one of me in a bathing costume in our back yard, while the other is me with some fairies. Elsie took that one.
Letter from Frances Griffiths to a friend in South Africa
Towards the end of 1918, Frances sent a letter to Johanna Parvin, a friend in Cape Town, South Africa, where Frances had lived for most of her life, enclosing the photograph of herself with the fairies. On the back she wrote "It is funny, I never used to see them in Africa. It must be too hot for them there."
The photographs became public in mid-1919, after Elsie's mother attended a meeting of the Theosophical Society in Bradford. The lecture that evening was on "fairy life", and at the end of the meeting Polly Wright showed the two fairy photographs taken by her daughter and niece to the speaker. As a result, the photographs were displayed at the society's annual conference in Harrogate, held a few months later. There they came to the attention of a leading member of the society, Edward Gardner. One of the central beliefs of theosophy is that humanity is undergoing a cycle of evolution, towards increasing "perfection", and Gardner recognised the potential significance of the photographs for the movement:
the fact that two young girls had not only been able to see fairies, which others had done, but had actually for the first time ever been able to materialise them at a density sufficient for their images to be recorded on a photographic plate, meant that it was possible that the next cycle of evolution was underway.
Gardner sent the prints along with the original glass-plate negatives to Harold Snelling, a photography expert. Snelling's opinion was that "the two negatives are entirely genuine, unfaked photographs ... [with] no trace whatsoever of studio work involving card or paper models". He did not go so far as to say that the photographs showed fairies, stating only that "these are straight forward photographs of whatever was in front of the camera at the time". Gardner had the prints "clarified" by Snelling, and new negatives produced, "more conducive to printing", for use in the illustrated lectures he gave around the UK. Snelling supplied the photographic prints which were available for sale at Gardner's lectures.
Author and prominent spiritualist Sir Arthur Conan Doyle learned of the photographs from the editor of the spiritualist publication Light. Doyle had been commissioned by The Strand Magazine to write an article on fairies for their Christmas issue, and the fairy photographs "must have seemed like a godsend" according to broadcaster and historian Magnus Magnusson. Doyle contacted Gardner in June 1920 to determine the background to the photographs, and wrote to Elsie and her father to request permission from the latter to use the prints in his article. Arthur Wright was "obviously impressed" that Doyle was involved, and gave his permission for publication, but he refused payment on the grounds that, if genuine, the images should not be "soiled" by money.
Gardner and Doyle sought a second expert opinion from the photographic company Kodak. Several of the company's technicians examined the enhanced prints, and although they agreed with Snelling that the pictures "showed no signs of being faked", they concluded that "this could not be taken as conclusive evidence ... that they were authentic photographs of fairies". Kodak declined to issue a certificate of authenticity. Gardner believed that the Kodak technicians might not have examined the photographs entirely objectively, observing that one had commented "after all, as fairies couldn't be true, the photographs must have been faked somehow". The prints were also examined by another photographic company, Ilford, who reported unequivocally that there was "some evidence of faking". Gardner and Doyle, perhaps rather optimistically, interpreted the results of the three expert evaluations as two in favour of the photographs' authenticity and one against.
Doyle also showed the photographs to the physicist and pioneering psychical researcher Sir Oliver Lodge, who believed the photographs to be fake. He suggested that a troupe of dancers had masqueraded as fairies, and expressed doubt as to their "distinctly 'Parisienne'" hairstyles.
On 4 October 2018 the first two of the photographs, Alice and the Fairies and Iris and the Gnome, were to be sold by Dominic Winter Auctioneers, in Gloucestershire. The prints, suspected to have been made in 1920 to sell at theosophical lectures, were expected to bring £700–£1000 each. As it turned out, 'Iris with the Gnome' sold for a hammer price of £5,400 (plus 24% buyer's premium incl. VAT), while 'Alice and the Fairies' sold for a hammer price of £15,000 (plus 24% buyer's premium incl. VAT).
Doyle was preoccupied with organising an imminent lecture tour of Australia, and in July 1920, sent Gardner to meet the Wright family. By this point, Frances was living with her parents in Scarborough, but Elsie's father told Gardner that he had been so certain the photographs were fakes that while the girls were away he searched their bedroom and the area around the beck (stream), looking for scraps of pictures or cutouts, but found nothing "incriminating".
Gardner believed the Wright family to be honest and respectable. To place the matter of the photographs' authenticity beyond doubt, he returned to Cottingley at the end of July with two W. Butcher & Sons Cameo folding plate cameras and 24 secretly marked photographic plates. Frances was invited to stay with the Wright family during the school summer holiday so that she and Elsie could take more pictures of the fairies. Gardner described his briefing in his 1945 Fairies: A Book of Real Fairies:
I went off, to Cottingley again, taking the two cameras and plates from London, and met the family and explained to the two girls the simple working of the cameras, giving one each to keep. The cameras were loaded, and my final advice was that they need go up to the glen only on fine days as they had been accustomed to do before and tice the fairies, as they called their way of attracting them, and see what they could get. I suggested only the most obvious and easy precautions about lighting and distance, for I knew it was essential they should feel free and unhampered and have no burden of responsibility. If nothing came of it all, I told them, they were not to mind a bit.
Until 19 August the weather was unsuitable for photography. Because Frances and Elsie insisted that the fairies would not show themselves if others were watching, Elsie's mother was persuaded to visit her sister's for tea, leaving the girls alone. In her absence the girls took several photographs, two of which appeared to show fairies. In the first, Frances and the Leaping Fairy, Frances is shown in profile with a winged fairy close by her nose. The second, Fairy offering Posy of Harebells to Elsie, shows a fairy either hovering or tiptoeing on a branch, and offering Elsie a flower. Two days later the girls took the last picture, Fairies and Their Sun-Bath.
The plates were packed in cotton wool and returned to Gardner in London, who sent an "ecstatic" telegram to Doyle, by then in Melbourne. Doyle wrote back:
My heart was gladdened when out here in far Australia I had your note and the three wonderful pictures which are confirmatory of our published results. When our fairies are admitted other psychic phenomena will find a more ready acceptance ... We have had continued messages at seances for some time that a visible sign was coming through.
Doyle's article in the December 1920 issue of The Strand contained two higher-resolution prints of the 1917 photographs, and sold out within days of publication. To protect the girls' anonymity, Frances and Elsie were called Alice and Iris respectively, and the Wright family was referred to as the "Carpenters". An enthusiastic and committed spiritualist, Doyle hoped that if the photographs convinced the public of the existence of fairies then they might more readily accept other psychic phenomena. He ended his article with the words:
The recognition of their existence will jolt the material twentieth-century mind out of its heavy ruts in the mud, and will make it admit that there is a glamour and mystery to life. Having discovered this, the world will not find it so difficult to accept that spiritual message supported by physical facts which have already been put before it.
Early press coverage was "mixed", generally a combination of "embarrassment and puzzlement". The historical novelist and poet Maurice Hewlett published a series of articles in the literary journal John O' London's Weekly, in which he concluded: "And knowing children, and knowing that Sir Arthur Conan Doyle has legs, I decide that the Miss Carpenters have pulled one of them." The London newspaper Truth on 5 January 1921 expressed a similar view; "For the true explanation of these fairy photographs what is wanted is not a knowledge of occult phenomena but a knowledge of children." Some public figures were more sympathetic. Margaret McMillan, the educational and social reformer, wrote: "How wonderful that to these dear children such a wonderful gift has been vouchsafed." The novelist Henry De Vere Stacpoole decided to take the fairy photographs and the girls at face value. In a letter to Gardner he wrote: "Look at Alice's [Frances'] face. Look at Iris's [Elsie's] face. There is an extraordinary thing called Truth which has 10 million faces and forms – it is God's currency and the cleverest coiner or forger can't imitate it."
Major John Hall-Edwards, a keen photographer and pioneer of medical X-ray treatments in Britain, was a particularly vigorous critic:
On the evidence I have no hesitation in saying that these photographs could have been "faked". I criticize the attitude of those who declared there is something supernatural in the circumstances attending to the taking of these pictures because, as a medical man, I believe that the inculcation of such absurd ideas into the minds of children will result in later life in manifestations and nervous disorder and mental disturbances.
Doyle used the later photographs in 1921 to illustrate a second article in The Strand, in which he described other accounts of fairy sightings. The article formed the foundation for his 1922 book The Coming of the Fairies. As before, the photographs were received with mixed credulity. Sceptics noted that the fairies "looked suspiciously like the traditional fairies of nursery tales" and that they had "very fashionable hairstyles".
Gardner made a final visit to Cottingley in August 1921. He again brought cameras and photographic plates for Frances and Elsie, but was accompanied by the occultist Geoffrey Hodson. Although neither of the girls claimed to see any fairies, and there were no more photographs, "on the contrary, he [Hodson] saw them [fairies] everywhere" and wrote voluminous notes on his observations.
By now Elsie and Frances were tired of the whole fairy business. Years later Elsie looked at a photograph of herself and Frances taken with Hodson and said: "Look at that, fed up with fairies." Both Elsie and Frances later admitted that they "played along" with Hodson "out of mischief", and that they considered him "a fake".
Public interest in the Cottingley Fairies gradually subsided after 1921. Elsie and Frances both eventually married, moved away from the area and each lived overseas for varying periods of time. In 1966, a reporter from the Daily Express newspaper traced Elsie, who was by then back in England. She admitted in an interview given that year that the fairies might have been "figments of my imagination", but left open the possibility she believed that she had somehow managed to photograph her thoughts. The media subsequently became interested in Frances and Elsie's photographs once again. BBC television's Nationwide programme investigated the case in 1971, but Elsie stuck to her story: "I've told you that they're photographs of figments of our imagination, and that's what I'm sticking to".
Elsie and Frances were interviewed by journalist Austin Mitchell in September 1976, for a programme broadcast on Yorkshire Television. When pressed, both women agreed that "a rational person doesn't see fairies", but they denied having fabricated the photographs. In 1978 the magician and scientific sceptic James Randi and a team from the Committee for the Scientific Investigation of Claims of the Paranormal examined the photographs, using a "computer enhancement process". They concluded that the photographs were fakes, and that strings could be seen supporting the fairies. Geoffrey Crawley, editor of the British Journal of Photography, undertook a "major scientific investigation of the photographs and the events surrounding them", published between 1982 and 1983, "the first major postwar analysis of the affair". He also concluded that the pictures were fakes.
In 1983, the cousins admitted in an article published in the magazine The Unexplained that the photographs had been faked, although both maintained that they really had seen fairies. Elsie had copied illustrations of dancing girls from a popular children's book of the time, Princess Mary's Gift Book, published in 1914, and drew wings on them. They said they had then cut out the cardboard figures and supported them with hatpins, disposing of their props in the beck once the photograph had been taken. But the cousins disagreed about the fifth and final photograph, which Doyle in his The Coming of the Fairies described in this way:
Seated on the upper left hand edge with wing well displayed is an undraped fairy apparently considering whether it is time to get up. An earlier riser of more mature age is seen on the right possessing abundant hair and wonderful wings. Her slightly denser body can be glimpsed within her fairy dress.
Elsie maintained it was a fake, just like all the others, but Frances insisted that it was genuine. In an interview given in the early 1980s Frances said:
It was a wet Saturday afternoon and we were just mooching about with our cameras and Elsie had nothing prepared. I saw these fairies building up in the grasses and just aimed the camera and took a photograph.
Both Frances and Elsie claimed to have taken the fifth photograph. In a letter published in The Times newspaper on 9 April 1983, Geoffrey Crawley explained the discrepancy by suggesting that the photograph was "an unintended double exposure of fairy cutouts in the grass", and thus "both ladies can be quite sincere in believing that they each took it".
In a 1985 interview on Yorkshire Television's Arthur C. Clarke's World of Strange Powers, Elsie said that she and Frances were too embarrassed to admit the truth after fooling Doyle, the author of Sherlock Holmes: "Two village kids and a brilliant man like Conan Doyle – well, we could only keep quiet." In the same interview Frances said: "I never even thought of it as being a fraud – it was just Elsie and I having a bit of fun and I can't understand to this day why they were taken in – they wanted to be taken in."
Frances died in 1986, and Elsie in 1988. Prints of their photographs of the fairies, along with a few other items including a first edition of Doyle's book The Coming of the Fairies, were sold at auction in London for £21,620 in 1998. That same year, Geoffrey Crawley sold his Cottingley Fairy material to the National Museum of Film, Photography and Television in Bradford (now the National Science and Media Museum), where it is on display. The collection included prints of the photographs, two of the cameras used by the girls, watercolours of fairies painted by Elsie, and a nine-page letter from Elsie admitting to the hoax. The glass photographic plates were bought for £6,000 by an unnamed buyer at a London auction held in 2001.
Frances's daughter, Christine Lynch, appeared in an episode of the television programme Antiques Roadshow in Belfast, broadcast on BBC One in January 2009, with the photographs and one of the cameras given to the girls by Doyle. Christine told the expert, Paul Atterbury, that she believed, as her mother had done, that the fairies in the fifth photograph were genuine. Atterbury estimated the value of the items at between £25,000 and £30,000. The first edition of Frances's memoirs was published a few months later, under the title Reflections on the Cottingley Fairies. The book contains correspondence, sometimes "bitter", between Elsie and Frances. In one letter, dated 1983, Frances wrote:
I hated those photographs from the age of 16 when Mr Gardner presented me with a bunch of flowers and wanted me to sit on the platform [at a Theosophical Society meeting] with him. I realised what I was in for if I did not keep myself hidden.
The 1997 films FairyTale: A True Story and Photographing Fairies were inspired by the events surrounding the Cottingley Fairies. The photographs were parodied in a 1994 book written by Terry Jones and Brian Froud, Lady Cottington's Pressed Fairy Book. In A. J. Elwood's 2021 novel, The Cottingley Cuckoo, a series of letters were written soon after the Cottingley fairy photographs were published claiming further sightings of fairies and proof of their existence.
In 2017 a further two fairy photographs were presented as evidence that the girls' parents were part of the conspiracy. Dating from 1917 and 1918, both photographs are poorly executed copies of two of the original fairy photographs. One was published in 1918 in The Sphere newspaper, which was before the originals had been seen by anyone outside the girls' immediate family.
In 2019, a print of the first of the five photographs sold for £1,050. A print of the second was also put up for sale but failed to sell as it did not meet its £500 reserve price. The pictures previously belonged to the Reverend George Vale Owen. In December 2019, the third camera used to take the images was acquired by the National Science and Media Museum. | [
{
"paragraph_id": 0,
"text": "The Cottingley Fairies appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of The Strand Magazine. Doyle, as a spiritualist, was enthusiastic about the photographs, and interpreted them as clear and visible evidence of psychic phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, and yet the photographs continued to hold the public imagination. In 1966 a reporter from the Daily Express newspaper traced Elsie, who had by then returned to the United Kingdom. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. As of 2019 the photographs and the cameras used are in the collections of the National Science and Media Museum in Bradford, England.",
"title": ""
},
{
"paragraph_id": 3,
"text": "In mid-1917 nine-year-old Frances Griffiths and her mother – both newly arrived in the UK from South Africa – were staying with Frances's aunt, Elsie Wright's mother, in the village of Cottingley in West Yorkshire; Elsie was then 16 years old. The two girls often played together beside the beck at the bottom of the garden, much to their mothers' annoyance, because they frequently came back with wet feet and clothes. Frances and Elsie said they only went to the beck to see the fairies, and to prove it, Elsie borrowed her father's camera, a Midg quarter-plate. The girls returned about 30 minutes later, \"triumphant\".",
"title": "1917 photographs"
},
{
"paragraph_id": 4,
"text": "Elsie's father, Arthur, was a keen amateur photographer, and had set up his own darkroom. The picture on the photographic plate he developed showed Frances behind a bush in the foreground, on which four fairies appeared to be dancing. Knowing his daughter's artistic ability, and that she had spent some time working in a photographer's studio, he dismissed the figures as cardboard cutouts. Two months later the girls borrowed his camera again, and this time returned with a photograph of Elsie sitting on the lawn holding out her hand to a 1-foot-tall (30 cm) gnome. Exasperated by what he believed to be \"nothing but a prank\", and convinced that the girls must have tampered with his camera in some way, Arthur Wright refused to lend it to them again. His wife Polly, however, believed the photographs to be authentic.",
"title": "1917 photographs"
},
{
"paragraph_id": 5,
"text": "I am learning French, Geometry, Cookery and Algebra at school now. Dad came home from France the other week after being there ten months, and we all think the war will be over in a few days ... I am sending two photos, both of me, one of me in a bathing costume in our back yard, while the other is me with some fairies. Elsie took that one.",
"title": "1917 photographs"
},
{
"paragraph_id": 6,
"text": "Letter from Frances Griffiths to a friend in South Africa",
"title": "1917 photographs"
},
{
"paragraph_id": 7,
"text": "Towards the end of 1918, Frances sent a letter to Johanna Parvin, a friend in Cape Town, South Africa, where Frances had lived for most of her life, enclosing the photograph of herself with the fairies. On the back she wrote \"It is funny, I never used to see them in Africa. It must be too hot for them there.\"",
"title": "1917 photographs"
},
{
"paragraph_id": 8,
"text": "The photographs became public in mid-1919, after Elsie's mother attended a meeting of the Theosophical Society in Bradford. The lecture that evening was on \"fairy life\", and at the end of the meeting Polly Wright showed the two fairy photographs taken by her daughter and niece to the speaker. As a result, the photographs were displayed at the society's annual conference in Harrogate, held a few months later. There they came to the attention of a leading member of the society, Edward Gardner. One of the central beliefs of theosophy is that humanity is undergoing a cycle of evolution, towards increasing \"perfection\", and Gardner recognised the potential significance of the photographs for the movement:",
"title": "1917 photographs"
},
{
"paragraph_id": 9,
"text": "the fact that two young girls had not only been able to see fairies, which others had done, but had actually for the first time ever been able to materialise them at a density sufficient for their images to be recorded on a photographic plate, meant that it was possible that the next cycle of evolution was underway.",
"title": "1917 photographs"
},
{
"paragraph_id": 10,
"text": "Gardner sent the prints along with the original glass-plate negatives to Harold Snelling, a photography expert. Snelling's opinion was that \"the two negatives are entirely genuine, unfaked photographs ... [with] no trace whatsoever of studio work involving card or paper models\". He did not go so far as to say that the photographs showed fairies, stating only that \"these are straight forward photographs of whatever was in front of the camera at the time\". Gardner had the prints \"clarified\" by Snelling, and new negatives produced, \"more conducive to printing\", for use in the illustrated lectures he gave around the UK. Snelling supplied the photographic prints which were available for sale at Gardner's lectures.",
"title": "Initial examinations"
},
{
"paragraph_id": 11,
"text": "Author and prominent spiritualist Sir Arthur Conan Doyle learned of the photographs from the editor of the spiritualist publication Light. Doyle had been commissioned by The Strand Magazine to write an article on fairies for their Christmas issue, and the fairy photographs \"must have seemed like a godsend\" according to broadcaster and historian Magnus Magnusson. Doyle contacted Gardner in June 1920 to determine the background to the photographs, and wrote to Elsie and her father to request permission from the latter to use the prints in his article. Arthur Wright was \"obviously impressed\" that Doyle was involved, and gave his permission for publication, but he refused payment on the grounds that, if genuine, the images should not be \"soiled\" by money.",
"title": "Initial examinations"
},
{
"paragraph_id": 12,
"text": "Gardner and Doyle sought a second expert opinion from the photographic company Kodak. Several of the company's technicians examined the enhanced prints, and although they agreed with Snelling that the pictures \"showed no signs of being faked\", they concluded that \"this could not be taken as conclusive evidence ... that they were authentic photographs of fairies\". Kodak declined to issue a certificate of authenticity. Gardner believed that the Kodak technicians might not have examined the photographs entirely objectively, observing that one had commented \"after all, as fairies couldn't be true, the photographs must have been faked somehow\". The prints were also examined by another photographic company, Ilford, who reported unequivocally that there was \"some evidence of faking\". Gardner and Doyle, perhaps rather optimistically, interpreted the results of the three expert evaluations as two in favour of the photographs' authenticity and one against.",
"title": "Initial examinations"
},
{
"paragraph_id": 13,
"text": "Doyle also showed the photographs to the physicist and pioneering psychical researcher Sir Oliver Lodge, who believed the photographs to be fake. He suggested that a troupe of dancers had masqueraded as fairies, and expressed doubt as to their \"distinctly 'Parisienne'\" hairstyles.",
"title": "Initial examinations"
},
{
"paragraph_id": 14,
"text": "On 4 October 2018 the first two of the photographs, Alice and the Fairies and Iris and the Gnome, were to be sold by Dominic Winter Auctioneers, in Gloucestershire. The prints, suspected to have been made in 1920 to sell at theosophical lectures, were expected to bring £700–£1000 each. As it turned out, 'Iris with the Gnome' sold for a hammer price of £5,400 (plus 24% buyer's premium incl. VAT), while 'Alice and the Fairies' sold for a hammer price of £15,000 (plus 24% buyer's premium incl. VAT).",
"title": "Initial examinations"
},
{
"paragraph_id": 15,
"text": "Doyle was preoccupied with organising an imminent lecture tour of Australia, and in July 1920, sent Gardner to meet the Wright family. By this point, Frances was living with her parents in Scarborough, but Elsie's father told Gardner that he had been so certain the photographs were fakes that while the girls were away he searched their bedroom and the area around the beck (stream), looking for scraps of pictures or cutouts, but found nothing \"incriminating\".",
"title": "1920 photographs"
},
{
"paragraph_id": 16,
"text": "Gardner believed the Wright family to be honest and respectable. To place the matter of the photographs' authenticity beyond doubt, he returned to Cottingley at the end of July with two W. Butcher & Sons Cameo folding plate cameras and 24 secretly marked photographic plates. Frances was invited to stay with the Wright family during the school summer holiday so that she and Elsie could take more pictures of the fairies. Gardner described his briefing in his 1945 Fairies: A Book of Real Fairies:",
"title": "1920 photographs"
},
{
"paragraph_id": 17,
"text": "I went off, to Cottingley again, taking the two cameras and plates from London, and met the family and explained to the two girls the simple working of the cameras, giving one each to keep. The cameras were loaded, and my final advice was that they need go up to the glen only on fine days as they had been accustomed to do before and tice the fairies, as they called their way of attracting them, and see what they could get. I suggested only the most obvious and easy precautions about lighting and distance, for I knew it was essential they should feel free and unhampered and have no burden of responsibility. If nothing came of it all, I told them, they were not to mind a bit.",
"title": "1920 photographs"
},
{
"paragraph_id": 18,
"text": "Until 19 August the weather was unsuitable for photography. Because Frances and Elsie insisted that the fairies would not show themselves if others were watching, Elsie's mother was persuaded to visit her sister's for tea, leaving the girls alone. In her absence the girls took several photographs, two of which appeared to show fairies. In the first, Frances and the Leaping Fairy, Frances is shown in profile with a winged fairy close by her nose. The second, Fairy offering Posy of Harebells to Elsie, shows a fairy either hovering or tiptoeing on a branch, and offering Elsie a flower. Two days later the girls took the last picture, Fairies and Their Sun-Bath.",
"title": "1920 photographs"
},
{
"paragraph_id": 19,
"text": "The plates were packed in cotton wool and returned to Gardner in London, who sent an \"ecstatic\" telegram to Doyle, by then in Melbourne. Doyle wrote back:",
"title": "1920 photographs"
},
{
"paragraph_id": 20,
"text": "My heart was gladdened when out here in far Australia I had your note and the three wonderful pictures which are confirmatory of our published results. When our fairies are admitted other psychic phenomena will find a more ready acceptance ... We have had continued messages at seances for some time that a visible sign was coming through.",
"title": "1920 photographs"
},
{
"paragraph_id": 21,
"text": "Doyle's article in the December 1920 issue of The Strand contained two higher-resolution prints of the 1917 photographs, and sold out within days of publication. To protect the girls' anonymity, Frances and Elsie were called Alice and Iris respectively, and the Wright family was referred to as the \"Carpenters\". An enthusiastic and committed spiritualist, Doyle hoped that if the photographs convinced the public of the existence of fairies then they might more readily accept other psychic phenomena. He ended his article with the words:",
"title": "Publication and reaction"
},
{
"paragraph_id": 22,
"text": "The recognition of their existence will jolt the material twentieth-century mind out of its heavy ruts in the mud, and will make it admit that there is a glamour and mystery to life. Having discovered this, the world will not find it so difficult to accept that spiritual message supported by physical facts which have already been put before it.",
"title": "Publication and reaction"
},
{
"paragraph_id": 23,
"text": "Early press coverage was \"mixed\", generally a combination of \"embarrassment and puzzlement\". The historical novelist and poet Maurice Hewlett published a series of articles in the literary journal John O' London's Weekly, in which he concluded: \"And knowing children, and knowing that Sir Arthur Conan Doyle has legs, I decide that the Miss Carpenters have pulled one of them.\" The London newspaper Truth on 5 January 1921 expressed a similar view; \"For the true explanation of these fairy photographs what is wanted is not a knowledge of occult phenomena but a knowledge of children.\" Some public figures were more sympathetic. Margaret McMillan, the educational and social reformer, wrote: \"How wonderful that to these dear children such a wonderful gift has been vouchsafed.\" The novelist Henry De Vere Stacpoole decided to take the fairy photographs and the girls at face value. In a letter to Gardner he wrote: \"Look at Alice's [Frances'] face. Look at Iris's [Elsie's] face. There is an extraordinary thing called Truth which has 10 million faces and forms – it is God's currency and the cleverest coiner or forger can't imitate it.\"",
"title": "Publication and reaction"
},
{
"paragraph_id": 24,
"text": "Major John Hall-Edwards, a keen photographer and pioneer of medical X-ray treatments in Britain, was a particularly vigorous critic:",
"title": "Publication and reaction"
},
{
"paragraph_id": 25,
"text": "On the evidence I have no hesitation in saying that these photographs could have been \"faked\". I criticize the attitude of those who declared there is something supernatural in the circumstances attending to the taking of these pictures because, as a medical man, I believe that the inculcation of such absurd ideas into the minds of children will result in later life in manifestations and nervous disorder and mental disturbances.",
"title": "Publication and reaction"
},
{
"paragraph_id": 26,
"text": "Doyle used the later photographs in 1921 to illustrate a second article in The Strand, in which he described other accounts of fairy sightings. The article formed the foundation for his 1922 book The Coming of the Fairies. As before, the photographs were received with mixed credulity. Sceptics noted that the fairies \"looked suspiciously like the traditional fairies of nursery tales\" and that they had \"very fashionable hairstyles\".",
"title": "Publication and reaction"
},
{
"paragraph_id": 27,
"text": "Gardner made a final visit to Cottingley in August 1921. He again brought cameras and photographic plates for Frances and Elsie, but was accompanied by the occultist Geoffrey Hodson. Although neither of the girls claimed to see any fairies, and there were no more photographs, \"on the contrary, he [Hodson] saw them [fairies] everywhere\" and wrote voluminous notes on his observations.",
"title": "Gardner's final visit"
},
{
"paragraph_id": 28,
"text": "By now Elsie and Frances were tired of the whole fairy business. Years later Elsie looked at a photograph of herself and Frances taken with Hodson and said: \"Look at that, fed up with fairies.\" Both Elsie and Frances later admitted that they \"played along\" with Hodson \"out of mischief\", and that they considered him \"a fake\".",
"title": "Gardner's final visit"
},
{
"paragraph_id": 29,
"text": "Public interest in the Cottingley Fairies gradually subsided after 1921. Elsie and Frances both eventually married, moved away from the area and each lived overseas for varying periods of time. In 1966, a reporter from the Daily Express newspaper traced Elsie, who was by then back in England. She admitted in an interview given that year that the fairies might have been \"figments of my imagination\", but left open the possibility she believed that she had somehow managed to photograph her thoughts. The media subsequently became interested in Frances and Elsie's photographs once again. BBC television's Nationwide programme investigated the case in 1971, but Elsie stuck to her story: \"I've told you that they're photographs of figments of our imagination, and that's what I'm sticking to\".",
"title": "Later investigations"
},
{
"paragraph_id": 30,
"text": "Elsie and Frances were interviewed by journalist Austin Mitchell in September 1976, for a programme broadcast on Yorkshire Television. When pressed, both women agreed that \"a rational person doesn't see fairies\", but they denied having fabricated the photographs. In 1978 the magician and scientific sceptic James Randi and a team from the Committee for the Scientific Investigation of Claims of the Paranormal examined the photographs, using a \"computer enhancement process\". They concluded that the photographs were fakes, and that strings could be seen supporting the fairies. Geoffrey Crawley, editor of the British Journal of Photography, undertook a \"major scientific investigation of the photographs and the events surrounding them\", published between 1982 and 1983, \"the first major postwar analysis of the affair\". He also concluded that the pictures were fakes.",
"title": "Later investigations"
},
{
"paragraph_id": 31,
"text": "In 1983, the cousins admitted in an article published in the magazine The Unexplained that the photographs had been faked, although both maintained that they really had seen fairies. Elsie had copied illustrations of dancing girls from a popular children's book of the time, Princess Mary's Gift Book, published in 1914, and drew wings on them. They said they had then cut out the cardboard figures and supported them with hatpins, disposing of their props in the beck once the photograph had been taken. But the cousins disagreed about the fifth and final photograph, which Doyle in his The Coming of the Fairies described in this way:",
"title": "Confession"
},
{
"paragraph_id": 32,
"text": "Seated on the upper left hand edge with wing well displayed is an undraped fairy apparently considering whether it is time to get up. An earlier riser of more mature age is seen on the right possessing abundant hair and wonderful wings. Her slightly denser body can be glimpsed within her fairy dress.",
"title": "Confession"
},
{
"paragraph_id": 33,
"text": "Elsie maintained it was a fake, just like all the others, but Frances insisted that it was genuine. In an interview given in the early 1980s Frances said:",
"title": "Confession"
},
{
"paragraph_id": 34,
"text": "It was a wet Saturday afternoon and we were just mooching about with our cameras and Elsie had nothing prepared. I saw these fairies building up in the grasses and just aimed the camera and took a photograph.",
"title": "Confession"
},
{
"paragraph_id": 35,
"text": "Both Frances and Elsie claimed to have taken the fifth photograph. In a letter published in The Times newspaper on 9 April 1983, Geoffrey Crawley explained the discrepancy by suggesting that the photograph was \"an unintended double exposure of fairy cutouts in the grass\", and thus \"both ladies can be quite sincere in believing that they each took it\".",
"title": "Confession"
},
{
"paragraph_id": 36,
"text": "In a 1985 interview on Yorkshire Television's Arthur C. Clarke's World of Strange Powers, Elsie said that she and Frances were too embarrassed to admit the truth after fooling Doyle, the author of Sherlock Holmes: \"Two village kids and a brilliant man like Conan Doyle – well, we could only keep quiet.\" In the same interview Frances said: \"I never even thought of it as being a fraud – it was just Elsie and I having a bit of fun and I can't understand to this day why they were taken in – they wanted to be taken in.\"",
"title": "Confession"
},
{
"paragraph_id": 37,
"text": "Frances died in 1986, and Elsie in 1988. Prints of their photographs of the fairies, along with a few other items including a first edition of Doyle's book The Coming of the Fairies, were sold at auction in London for £21,620 in 1998. That same year, Geoffrey Crawley sold his Cottingley Fairy material to the National Museum of Film, Photography and Television in Bradford (now the National Science and Media Museum), where it is on display. The collection included prints of the photographs, two of the cameras used by the girls, watercolours of fairies painted by Elsie, and a nine-page letter from Elsie admitting to the hoax. The glass photographic plates were bought for £6,000 by an unnamed buyer at a London auction held in 2001.",
"title": "Subsequent history"
},
{
"paragraph_id": 38,
"text": "Frances's daughter, Christine Lynch, appeared in an episode of the television programme Antiques Roadshow in Belfast, broadcast on BBC One in January 2009, with the photographs and one of the cameras given to the girls by Doyle. Christine told the expert, Paul Atterbury, that she believed, as her mother had done, that the fairies in the fifth photograph were genuine. Atterbury estimated the value of the items at between £25,000 and £30,000. The first edition of Frances's memoirs was published a few months later, under the title Reflections on the Cottingley Fairies. The book contains correspondence, sometimes \"bitter\", between Elsie and Frances. In one letter, dated 1983, Frances wrote:",
"title": "Subsequent history"
},
{
"paragraph_id": 39,
"text": "I hated those photographs from the age of 16 when Mr Gardner presented me with a bunch of flowers and wanted me to sit on the platform [at a Theosophical Society meeting] with him. I realised what I was in for if I did not keep myself hidden.",
"title": "Subsequent history"
},
{
"paragraph_id": 40,
"text": "The 1997 films FairyTale: A True Story and Photographing Fairies were inspired by the events surrounding the Cottingley Fairies. The photographs were parodied in a 1994 book written by Terry Jones and Brian Froud, Lady Cottington's Pressed Fairy Book. In A. J. Elwood's 2021 novel, The Cottingley Cuckoo, a series of letters were written soon after the Cottingley fairy photographs were published claiming further sightings of fairies and proof of their existence.",
"title": "Subsequent history"
},
{
"paragraph_id": 41,
"text": "In 2017 a further two fairy photographs were presented as evidence that the girls' parents were part of the conspiracy. Dating from 1917 and 1918, both photographs are poorly executed copies of two of the original fairy photographs. One was published in 1918 in The Sphere newspaper, which was before the originals had been seen by anyone outside the girls' immediate family.",
"title": "Subsequent history"
},
{
"paragraph_id": 42,
"text": "In 2019, a print of the first of the five photographs sold for £1,050. A print of the second was also put up for sale but failed to sell as it did not meet its £500 reserve price. The pictures previously belonged to the Reverend George Vale Owen. In December 2019, the third camera used to take the images was acquired by the National Science and Media Museum.",
"title": "Subsequent history"
},
{
"paragraph_id": 43,
"text": "",
"title": "External links"
}
] | The Cottingley Fairies appear in a series of five photographs taken by Elsie Wright (1901–1988) and Frances Griffiths (1907–1986), two young cousins who lived in Cottingley, near Bradford in England. In 1917, when the first two photographs were taken, Elsie was 16 years old and Frances was 9. The pictures came to the attention of writer Sir Arthur Conan Doyle, who used them to illustrate an article on fairies he had been commissioned to write for the Christmas 1920 edition of The Strand Magazine. Doyle, as a spiritualist, was enthusiastic about the photographs, and interpreted them as clear and visible evidence of psychic phenomena. Public reaction was mixed; some accepted the images as genuine, others believed that they had been faked. Interest in the Cottingley Fairies gradually declined after 1921. Both girls married and lived abroad for a time after they grew up, and yet the photographs continued to hold the public imagination. In 1966 a reporter from the Daily Express newspaper traced Elsie, who had by then returned to the United Kingdom. Elsie left open the possibility that she believed she had photographed her thoughts, and the media once again became interested in the story. In the early 1980s Elsie and Frances admitted that the photographs were faked, using cardboard cutouts of fairies copied from a popular children's book of the time, but Frances maintained that the fifth and final photograph was genuine. As of 2019 the photographs and the cameras used are in the collections of the National Science and Media Museum in Bradford, England. | 2001-10-11T00:02:16Z | 2023-12-24T20:39:29Z | [
"Template:Featured article",
"Template:Authority control",
"Template:Use British English",
"Template:Sfnp",
"Template:Cite news",
"Template:ISSN",
"Template:Librivox book",
"Template:Webarchive",
"Template:Clear",
"Template:Reflist",
"Template:Cite web",
"Template:Subscription required",
"Template:Refend",
"Template:Convert",
"Template:Quote box",
"Template:Refbegin",
"Template:Cite journal",
"Template:Citation",
"Template:Skeptoid",
"Template:Short description",
"Template:Use dmy dates",
"Template:Snd",
"Template:Blockquote",
"Template:'\"",
"Template:Commons category",
"Template:Wikisource"
] | https://en.wikipedia.org/wiki/Cottingley_Fairies |
6,752 | Cheka | The All-Russian Extraordinary Commission (AREOC; Russian: Всероссийская чрезвычайная комиссия, tr. Vserossijskaja črezvyčajnaja komissija, IPA: [fsʲɪrɐˈsʲijskəjə tɕrʲɪzvɨˈtɕæjnəjə kɐˈmʲisʲɪjə]), abbreviated as VChK (Russian: ВЧК, IPA: [vɛ tɕe ˈka]), and commonly known as Cheka (Russian: Чека, IPA: [tɕɪˈka]; from the initialism ЧК), was the first of a succession of Soviet secret-police organizations known for conducting the Red Terror. Established on December 5 (Old Style) 1917 by the Sovnarkom, it came under the leadership of Bolshevik revolutionary Felix Dzerzhinsky. By late 1918, hundreds of Cheka committees had sprung up in the Russian SFSR at all levels.
Ostensibly set up to protect the revolution from reactionary forces, i.e., "class enemies" such as the bourgeoisie and members of the clergy, it soon became a tool of repression against all political opponents of the communist regime. At the direction of Vladimir Lenin, the Cheka performed mass arrests, imprisonments, torture, and executions without trial.
In 1921, the Troops for the Internal Defense of the Republic (a branch of the Cheka) numbered at least 200,000. They policed labor camps, ran the Gulag system, conducted requisitions of food, and put down rebellions and riots by workers and peasants and mutinies in the Red Army.
The organization was dissolved in 1922 and succeeded by the State Political Directorate or GPU.
The official designation was All-Russian Extraordinary (or Emergency) Commission for Combating Counter-Revolution and Sabotage under the Council of People's Commissars of the RSFSR (Russian: Всероссийская чрезвычайная комиссия по борьбе с контрреволюцией и саботажем при Совете народных комиссаров РСФСР, Vserossiyskaya chrezvychaynaya komissiya po borbe s kontrrevolyutsiyey i sabotazhem pri Sovete narodnykh komisarov RSFSR).
In 1918 its name was changed, becoming All-Russian Extraordinary Commission for Combating Counter-Revolution, Profiteering and Corruption.
A member of Cheka was called a chekist (Russian: чеки́ст, tr. chekíst, IPA: [t͡ɕɪˈkʲist] ). Also, the term chekist often referred to Soviet secret police throughout the Soviet period, despite official name changes over time. In The Gulag Archipelago, Alexander Solzhenitsyn recalls that zeks in the labor camps used old chekist as a mark of special esteem for particularly experienced camp administrators. The term is still found in use in Russia today (for example, President Vladimir Putin has been referred to in the Russian media as a chekist due to his career in the KGB and as head of the KGB's successor, FSB).
The chekists commonly dressed in black leather, including long flowing coats, reportedly after being issued such distinctive coats early in their existence. Western communists adopted this clothing fashion. The Chekists also often carried with them Greek-style worry beads made of amber, which had become "fashionable among high officials during the time of the 'cleansing'".
In 1921, the Troops for the Internal Defense of the Republic (a branch of the Cheka) numbered at least 200,000. These troops policed labor camps, ran the Gulag system, conducted requisitions of food, and subjected political opponents to secret arrest, detention, torture and summary execution. They also put down rebellions and riots by workers or peasants, and mutinies in the desertion-plagued Red Army.
After 1922, Cheka groups underwent the first of a series of reorganizations; however, the theme of a government dominated by "the organs" persisted indefinitely afterward, and Soviet citizens continued to refer to members of the various organs as Chekists.
In the first month and a half after the October Revolution (1917), the duty of "extinguishing the resistance of exploiters" was assigned to the Petrograd Military Revolutionary Committee (or PVRK). It was a temporary body working under directives of the Council of People's Commissars (Sovnarkom) and the Central Committee of the RSDLP(b). The VRK created new bodies of government, organized food delivery to cities and the Army, requisitioned products from the bourgeoisie, and sent its emissaries and agitators into the provinces. One of its most important functions was the security of revolutionary order, and the fight against counterrevolutionary activity (see: Anti-Soviet agitation).
On December 1, 1917, the All-Russian Central Executive Committee (VTsIK or TsIK) reviewed a proposed reorganization of the VRK, and possible replacement of it. On December 5, the Petrograd VRK published an announcement of dissolution and transferred its functions to the department of TsIK for the fight against "counterrevolutionaries". On December 6, the Council of People's Commissars (Sovnarkom) discussed how to combat a strike of government workers across Russia. They decided that a special commission was needed to implement the "most energetically revolutionary" measures. Felix Dzerzhinsky ("Iron Felix") was appointed as Director and invited the participation of the following individuals: V. K. Averin, V. V. Yakovlev, D. G. Yevseyev, N. A. Zhydelev, I. K. Ksenofontov, G. K. Ordjonikidze, Ya. Kh. Peters, K. A. Peterson, V. A. Trifonov.
On December 7, 1917, all invited except Zhydelev and Vasilevsky gathered in the Smolny Institute to discuss the competence and structure of the commission to combat counterrevolution and sabotage. The obligations of the commission were: "to liquidate to the root all of the counterrevolutionary and sabotage activities and all attempts to them in all of Russia, to hand over counter-revolutionaries and saboteurs to the revolutionary tribunals, develop measures to combat them and relentlessly apply them in real-world applications. The commission should only conduct a preliminary investigation". The commission was also to keep watch on the press, counterrevolutionary parties, saboteur officials, and other criminals.
Three sections were created: informational, organizational, and a unit to combat counter-revolution and sabotage. Upon the end of the meeting, Dzerzhinsky reported to the Sovnarkom with the requested information. The commission was allowed to apply such measures of repression as 'confiscation, deprivation of ration cards, publication of lists of enemies of the people, etc.'. That day, Sovnarkom officially confirmed the creation of VCheKa. The commission was created not under the VTsIK as was previously anticipated, but rather under the Council of the People's Commissars.
On December 8, 1917, some of the original members of the VCheka were replaced. Averin, Ordzhonikidze, and Trifonov were replaced by V. V. Fomin, S. E. Shchukin, Ilyin, and Chernov. At the meeting of December 8, a presidium of VChK of five members was elected, chaired by Dzerzhinsky. The issues of "speculation" (profiteering, such as by black-market grain sellers) and "corruption" were raised at the same meeting and assigned to Peters to address and report results to one of the next meetings of the commission. A circular, published on December 28 [O.S. December 15] 1917, gave the address of VCheka's first headquarters as "Petrograd, Gorokhovaya 2, 4th floor". On December 11, Fomin was ordered to organize a section to suppress "speculation", and on the same day VCheKa tasked Shchukin with conducting arrests of counterfeiters.
In January 1918, a subsection of the anti-counterrevolutionary effort was created to police bank officials. The structure of VCheKa changed repeatedly. By March 1918, when the organization came to Moscow, it contained the following sections: against counterrevolution, speculation, non-residents, and information gathering. During 1918–1919, some new units were created: secret-operative, investigatory, transportation, military (special), operative, and instructional. By 1921, it changed once again, forming the following sections: directory of affairs, administrative-organizational, secret-operative, economic, and foreign affairs.
In the first months of its existence, VCheKa consisted of only 40 officials. It commanded a team of soldiers, the Sveaborgesky regiment, as well as a group of Red Guardsmen. On January 14, 1918, Sovnarkom ordered Dzerzhinsky to organize teams of "energetic and ideological" sailors to combat speculation. By the spring of 1918, the commission had several teams: in addition to the Sveaborge team, it had an intelligence team, a team of sailors, and a strike team. Through the winter of 1917–1918, all activities of VCheKa were centralized mainly in the city of Petrograd. It was one of several commissions in the country which fought against counterrevolution, speculation, banditry, and other activities perceived as crimes. Other organizations included the Bureau of Military Commissars, an Army-Navy investigatory commission to attack the counterrevolutionary element in the Red Army, and the Central Requisite and Unloading Commission to fight speculation. The investigation of counterrevolutionary or major criminal offenses was conducted by the Investigatory Commission of Revtribunal. The functions of VCheKa were closely intertwined with the Commission of V. D. Bonch-Bruyevich, which, besides the fight against wine pogroms, was engaged in the investigation of most major political offenses (see: Bonch-Bruyevich Commission).
VCheKa had either to transfer the results of its activities to the Investigatory Commission of Revtribunal or to dismiss them. Oversight of the commission's activity was provided by the People's Commissariat for Justice (Narkomjust, at that time headed by Isaac Steinberg) and Internal Affairs (NKVD, at that time headed by Grigory Petrovsky). Although the VCheKa was officially an organization independent of the NKVD, its chief members such as Dzerzhinsky, Latsis, Unszlicht, and Uritsky (all main chekists) had since November 1917 composed the collegiate of the NKVD headed by Petrovsky. In November 1918, Petrovsky was appointed as head of the All-Ukrainian Central Military Revolutionary Committee during VCheKa's expansion to the provinces and front-lines. At the time of political competition between the Bolsheviks and SRs (January 1918), the Left SRs attempted to curb the rights of VCheKa and establish, through the Narkomjust, their control over its work. Having failed in attempts to subordinate the VCheKa to Narkomjust, the Left SRs tried to gain control of the Extraordinary Commission in a different way: they requested that the Central Committee of the party be granted the right to directly enter their representatives into the VCheKa. Sovnarkom recognized the desirability of including five representatives of the Left Socialist-Revolutionary faction of VTsIK. Left SRs were granted the post of a companion (deputy) chairman of VCheKa. However, Sovnarkom, in which the majority belonged to the representatives of the RSDLP(b), retained the right to approve members of the collegium of the VCheKa.
Originally, members of the Cheka were exclusively Bolshevik; however, in January 1918, Left SRs also joined the organization. The Left SRs were expelled or arrested later in 1918, following the attempted assassination of Lenin by an SR, Fanni Kaplan.
By the end of January 1918, the Investigatory Commission of the Petrograd Soviet (probably the same as that of Revtribunal) petitioned Sovnarkom to delineate the roles of the detection and judicial-investigatory organs. It proposed to leave to the VCheKa and the Commission of Bonch-Bruyevich only the functions of detection and suppression, while investigative functions would be transferred entirely to itself. The Investigatory Commission prevailed. On January 31, 1918, Sovnarkom ordered that VCheKa be relieved of the investigative functions, leaving to the commission only the functions of detection, suppression, and prevention of anti-revolutionary crimes. At the meeting of the Council of People's Commissars on January 31, 1918, a merger of VCheKa and the Commission of Bonch-Bruyevich was proposed. The existence of both commissions, the VCheKa of Sovnarkom and the Commission of Bonch-Bruyevich of VTsIK, with almost the same functions and equal rights, had become impractical. A decision followed two weeks later.
On February 23, 1918, VCheKa sent a radio telegram to all Soviets with a petition to immediately organize emergency commissions to combat counter-revolution, sabotage and speculation, if such commissions had not yet been organized. February 1918 saw the creation of local Extraordinary Commissions. One of the first founded was the Moscow Cheka. Sections and commissariats to combat counterrevolution were established in other cities. The Extraordinary Commissions usually arose in areas at moments of the greatest aggravation of the political situation. On February 25, 1918, as the counterrevolutionary organization Union of Front-liners was making advances, the executive committee of the Saratov Soviet formed a counter-revolutionary section. On March 7, 1918, following VCheKa's move from Petrograd to Moscow, a separate Petrograd Cheka was created. On March 9, a section for combating counterrevolution was created under the Omsk Soviet. Extraordinary commissions were also created in Penza, Perm, Novgorod, Cherepovets, Rostov, and Taganrog. On March 18, VCheKa adopted a resolution, The Work of VCheKa on the All-Russian Scale, foreseeing the formation everywhere of Extraordinary Commissions after the same model, and sent a letter that called for the widespread establishment of the Cheka in combating counterrevolution, speculation, and sabotage. Establishment of provincial Extraordinary Commissions was largely completed by August 1918. In the Soviet Republic, there were 38 gubernatorial Chekas (Gubcheks) by this time.
On June 12, 1918, the All-Russian Conference of Cheka adopted the Basic Provisions on the Organization of Extraordinary Commissions. They set out to form Extraordinary Commissions not only at the Oblast and Guberniya levels, but also at the large Uyezd Soviets. By August 1918, the Soviet Republic counted some 75 Uyezd-level Extraordinary Commissions. By the end of the year, 365 Uyezd-level Chekas were established.
In 1918, the All-Russia Extraordinary Commission and the Soviets managed to establish a local Cheka apparatus. It included Oblast, Guberniya, Raion, Uyezd, and Volost Chekas, with Raion and Volost Extraordinary Commissioners. In addition, border security Chekas were included in the system of local Cheka bodies.
In the autumn of 1918, as consolidation of the political situation of the republic continued, a move toward the elimination of Uyezd-, Raion-, and Volost-level Chekas, as well as of the institution of Extraordinary Commissions, was considered. On January 20, 1919, VTsIK adopted a resolution prepared by VCheKa, On the abolition of Uyezd Extraordinary Commissions. On January 16 the presidium of VCheKa approved the draft on the establishment of the Politburo at the Uyezd militsiya. This decision was approved by the Conference of the Extraordinary Commission IV, held in early February 1920.
On August 3, 1918, a VCheKa section for combating counterrevolution, speculation and sabotage on railways was created. On August 7, 1918, Sovnarkom adopted a decree on the organization of the railway section at VCheKa. Combating counterrevolution, speculation, and crimes on railroads passed under the jurisdiction of the railway section of VCheKa and the local Chekas. In August 1918, railway sections were formed under the Gubcheks. Formally, they were part of the non-resident sections, but in fact constituted a separate division, largely autonomous in their activities. The gubernatorial and oblast-type Chekas retained only control and investigative functions in relation to the transportation sections.
The systematic work of VCheKa organs in the RKKA began in July 1918, a period of extreme tension in the civil war and class struggle in the country. On July 16, 1918, the Council of People's Commissars formed the Extraordinary Commission for combating counterrevolution at the Czechoslovak (Eastern) Front, led by M. I. Latsis. In the fall of 1918, Extraordinary Commissions to combat counterrevolution on the Southern (Ukraine) Front were formed. In late November, the Second All-Russian Conference of the Extraordinary Commissions, after a report from I. N. Polukarov, accepted a decision to establish Cheka sections at all frontlines and armies, and granted them the right to appoint their commissioners in military units. On December 9, 1918, the collegiate (or presidium) of VCheKa decided to form a military section, headed by M. S. Kedrov, to combat counterrevolution in the Army. In early 1919, the military control and the military section of VCheKa were merged into one body, the Special Section of the Republic, with Kedrov as head. On January 1, he issued an order to establish the Special Section. The order instructed agencies everywhere to unite the Military control and the military sections of Chekas and to form special sections of frontlines, armies, military districts, and guberniyas.
In November 1920 the Soviet of Labor and Defense created a Special Section of VCheKa for the security of the state border. On February 6, 1922, after the Ninth All-Russian Soviet Congress, the Cheka was dissolved by VTsIK, "with expressions of gratitude for heroic work." It was replaced by the State Political Administration or GPU, a section of the NKVD of the Russian Soviet Federative Socialist Republic (RSFSR). Dzerzhinsky remained as chief of the new organization.
As its name implied, the Extraordinary Commission had virtually unlimited powers and could interpret them in any way it wished. No standard procedures were ever set up, except that the commission was supposed to send the arrested to the Military-Revolutionary tribunals if outside of a war zone. This left an opportunity for a wide range of interpretations, as the whole country was in total chaos. At the direction of Lenin, the Cheka performed mass arrests, imprisonments, and executions of "enemies of the people". In this, the Cheka said that they targeted "class enemies" such as the bourgeoisie, and members of the clergy.
Within a month, the Cheka had extended its repression to all political opponents of the communist government, including anarchists and others on the left. On April 11/12, 1918, some 26 anarchist political centres in Moscow were attacked. Forty anarchists were killed by Cheka forces, and about 500 were arrested and jailed after a pitched battle took place between the two groups. In response to the anarchists' resistance, the Cheka orchestrated a massive retaliatory campaign of repression, executions, and arrests against all opponents of the Bolshevik government, in what came to be known as "Red Terror". The Red Terror, implemented by Dzerzhinsky on September 5, 1918, was vividly described by the Red Army journal Krasnaya Gazeta:
Without mercy, without sparing, we will kill our enemies in scores of hundreds. Let them be thousands, let them drown themselves in their own blood. For the blood of Lenin and Uritsky … let there be floods of blood of the bourgeoisie – more blood, as much as possible...
An early Bolshevik, Victor Serge, described in his book Memoirs of a Revolutionary:
Since the first massacres of Red prisoners by the Whites, the murders of Volodarsky and Uritsky and the attempt against Lenin (in the summer of 1918), the custom of arresting and, often, executing hostages had become generalized and legal. Already the Cheka, which made mass arrests of suspects, was tending to settle their fate independently, under formal control of the Party, but in reality without anybody's knowledge. The Party endeavoured to head it with incorruptible men like the former convict Dzerzhinsky, a sincere idealist, ruthless but chivalrous, with the emaciated profile of an Inquisitor: tall forehead, bony nose, untidy goatee, and an expression of weariness and austerity. But the Party had few men of this stamp and many Chekas. I believe that the formation of the Chekas was one of the gravest and most impermissible errors that the Bolshevik leaders committed in 1918 when plots, blockades, and interventions made them lose their heads. All evidence indicates that revolutionary tribunals, functioning in the light of day and admitting the right of defense, would have attained the same efficiency with far less abuse and depravity. Was it necessary to revert to the procedures of the Inquisition?
The Cheka was also used against Nestor Makhno's Revolutionary Insurgent Army of Ukraine. After the Insurgent Army had served its purpose in aiding the Red Army to stop the Whites under Denikin, the Soviet communist government decided to eliminate the anarchist forces. In May 1919, two Cheka agents sent to assassinate Makhno were caught and executed.
Many victims of Cheka repression were "bourgeois hostages" rounded up and held in readiness for summary execution in reprisal for any alleged counter-revolutionary act. Wholesale, indiscriminate arrests became an integral part of the system. The Cheka used trucks disguised as delivery trucks, called "Black Marias", for the secret arrest and transport of prisoners.
It was during the Red Terror that the Cheka, hoping to avoid the bloody aftermath of having half-dead victims writhing on the floor, developed a technique for execution known later by the German words "Nackenschuss" or "Genickschuss", a shot to the nape of the neck, which caused minimal blood loss and instant death. The victim's head was bent forward, and the executioner fired slightly downward at point-blank range. This had become the standard method used later by the NKVD to liquidate Joseph Stalin's purge victims and others.
It is believed that there were more than three million deserters from the Red Army in 1919 and 1920. Approximately 500,000 deserters were arrested in 1919 and close to 800,000 in 1920 by troops of the 'Special Punitive Department' of the Cheka, created to punish desertions. These troops were used to forcibly repatriate deserters, taking and shooting hostages to force compliance or to set an example.
In September 1918, according to The Black Book of Communism, in only twelve provinces of Russia, 48,735 deserters and 7,325 "bandits" were arrested, 1,826 were killed and 2,230 were executed. The exact identity of these individuals is confused by the fact that the Soviet Bolshevik government used the term 'bandit' to cover ordinary criminals as well as armed and unarmed political opponents, such as the anarchists.
Estimates on Cheka executions vary widely. The lowest figures (disputed below) are provided by Dzerzhinsky's lieutenant Martyn Latsis, limited to the RSFSR over the period 1918–1920.
Experts generally agree these semi-official figures are vastly understated. Pioneering historian of the Red Terror Sergei Melgunov claims that this was done deliberately in an attempt to demonstrate the government's humanity. For example, he refutes the claim made by Latsis that only 22 executions were carried out in the first six months of the Cheka's existence by providing evidence that the true number was 884 executions. W. H. Chamberlin claims, "It is simply impossible to believe that the Cheka only put to death 12,733 people in all of Russia up to the end of the civil war." Donald Rayfield concurs, noting that, "Plausible evidence reveals that the actual numbers … vastly exceeded the official figures." Chamberlin provides the "reasonable and probably moderate" estimate of 50,000, while others provide estimates ranging up to 500,000. Several scholars put the number of executions at about 250,000. Some believe it is possible more people were murdered by the Cheka than died in battle. Historian James Ryan gives a modest estimate of 28,000 executions per year from December 1917 to February 1922.
Lenin himself seemed unfazed by the killings. On 12 January 1920, while addressing trade union leaders, he said: "We did not hesitate to shoot thousands of people, and we shall not hesitate, and we shall save the country." On 14 May 1921, the Politburo, chaired by Lenin, passed a motion "broadening the rights of the [Cheka] in relation to the use of the [death penalty]."
There is no consensus among Western historians on the number of deaths from the Red Terror. One source gives estimates of 28,000 executions per year from December 1917 to February 1922. Estimates for the number of people shot during the initial period of the Red Terror are at least 10,000. Estimates for the whole period range from a low of 50,000 to highs of 140,000 and 200,000 executed. Most estimates put the total number of executions at about 100,000.
According to Vadim Erlikhman's investigation, the number of the Red Terror's victims is at least 1,200,000 people. According to Robert Conquest, a total of 140,000 people were shot in 1917–1922. Candidate of Historical Sciences Nikolay Zayats states that about 37,300 people were shot by the Cheka in 1918–1922, with a further 14,200 shot in 1918–1921 by verdicts of the tribunals, i.e. about 50,000–55,000 people in total, although executions and atrocities were not limited to the Cheka, having been organized by the Red Army as well.
According to anti-Bolshevik Socialist Revolutionary Sergei Melgunov (1879–1956), at the end of 1919, the Special Investigation Commission to investigate the atrocities of the Bolsheviks estimated the number of deaths at 1,766,188 people in 1918–1919 only.
The Cheka engaged in the widespread practice of torture. Depending on the Cheka committee in a given city, the methods included: being skinned alive, scalped, "crowned" with barbed wire, impaled, crucified, hanged, stoned to death, tied to planks and pushed slowly into furnaces or tanks of boiling water, or rolled around naked in internally nail-studded barrels. Chekists reportedly poured water on naked prisoners in the winter-bound streets until they became living ice statues. Others reportedly beheaded their victims by twisting their necks until their heads could be torn off. The Cheka detachments stationed in Kiev reportedly would attach an iron tube to the torso of a bound victim and insert a rat in the tube closed off with wire netting, while the tube was held over a flame until the rat began gnawing through the victim's guts in an effort to escape.
Women and children were also victims of Cheka terror. Women would sometimes be tortured and raped before being shot. Children between the ages of 8 and 13 were imprisoned and occasionally executed.
All of these atrocities were reported on numerous occasions in Pravda and Izvestiya: on January 26, 1919, Izvestiya #18 carried the article Is it really a medieval imprisonment? («Неужели средневековый застенок?»); on February 22, 1919, Pravda #12 published details of the Vladimir Cheka's tortures; and on September 21, 1922, the Socialist Herald published details of a series of tortures conducted by the Stavropol Cheka (hot basement, cold basement, skull measuring, etc.).
The Chekists were also supplemented by the militarized Units of Special Purpose (the Party's Spetsnaz or ЧОН).
The Cheka actively and openly employed kidnapping, which enabled it to extinguish numerous cases of discontent, especially among the rural population. Among the most notorious of these was the Tambov rebellion.
Villages were bombarded to the point of complete annihilation, as in the case of Tretyaki, Novokhopersk uyezd, Voronezh Governorate.
As a result of this relentless violence, more than a few Chekists ended up with psychopathic disorders, which Nikolai Bukharin said were "an occupational hazard of the Chekist profession." Many hardened themselves to the executions by heavy drinking and drug use. Some developed a gangster-like slang for the verb to kill in an attempt to distance themselves from the killings, such as 'shooting partridges', or 'sealing' a victim, or giving him a natsokal (onomatopoeia of the trigger action).
On November 30, 1992, at the initiative of the President of the Russian Federation, the Constitutional Court of the Russian Federation recognized the Red Terror as unlawful, which in turn led to the suspension of the Communist Party of the RSFSR.
Cheka departments were organized not only in big cities and guberniya seats, but also in each uyezd, at all front-lines, and in military formations. Nothing is known about the resources with which they were created. Many who were hired to head those departments were so-called "nestlings of Alexander Kerensky".
Konstantin Preobrazhenskiy criticised the continuing celebration of the professional holiday of the old and the modern Russian security services on the anniversary of the creation of the Cheka, with the assent of the Presidents of Russia (Vladimir Putin, a former KGB officer, chose not to change the date): "The successors of the KGB still haven't renounced anything; they even celebrate their professional holiday the same day, as during repression, on the 20th of December. It is as if the present intelligence and counterespionage services of Germany celebrated Gestapo Day. I can imagine how indignant our press would be!"
{
"paragraph_id": 0,
"text": "The All-Russian Extraordinary Commission (AREOC; Russian: Всероссийская чрезвычайная комиссия, tr. Vserossijskaja črezvyčajnaja komissija, IPA: [fsʲɪrɐˈsʲijskəjə tɕrʲɪzvɨˈtɕæjnəjə kɐˈmʲisʲɪjə]), abbreviated as VChK (Russian: ВЧК, IPA: [vɛ tɕe ˈka]), and commonly known as Cheka (Russian: Чека, IPA: [tɕɪˈka]; from the initialism ЧК), was the first of a succession of Soviet secret-police organizations known for conducting the Red Terror. Established on December 5 (Old Style) 1917 by the Sovnarkom, it came under the leadership of Bolshevik revolutionary Felix Dzerzhinsky. By late 1918, hundreds of Cheka committees had sprung up in the Russian SFSR at all levels.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Ostensibly set up to protect the revolution from reactionary forces, i.e., \"class enemies\" such as the bourgeoisie and members of the clergy, it soon became the repression tool against all political opponents of the communist regime. At the direction of Vladimir Lenin, the Cheka performed mass arrests, imprisonments, torture, and executions without trial.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In 1921, the Troops for the Internal Defense of the Republic (a branch of the Cheka) numbered at least 200,000. They policed labor camps, ran the Gulag system, conducted requisitions of food, and put down rebellions and riots by workers and peasants and mutinies in the Red Army.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The organization was dissolved in 1922 and succeeded by the State Political Directorate or GPU.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The official designation was All-Russian Extraordinary (or Emergency) Commission for Combating Counter-Revolution and Sabotage under the Council of People's Commissars of the RSFSR (Russian: Всероссийская чрезвычайная комиссия по борьбе с контрреволюцией и саботажем при Совете народных комиссаров РСФСР, Vserossiyskaya chrezvychaynaya komissiya po borbe s kontrrevolyutsiyey i sabotazhem pri Sovete narodnykh komisarov RSFSR).",
"title": "Name"
},
{
"paragraph_id": 5,
"text": "In 1918 its name was changed, becoming All-Russian Extraordinary Commission for Combating Counter-Revolution, Profiteering and Corruption.",
"title": "Name"
},
{
"paragraph_id": 6,
"text": "A member of Cheka was called a chekist (Russian: чеки́ст, tr. chekíst, IPA: [t͡ɕɪˈkʲist] ). Also, the term chekist often referred to Soviet secret police throughout the Soviet period, despite official name changes over time. In The Gulag Archipelago, Alexander Solzhenitsyn recalls that zeks in the labor camps used old chekist as a mark of special esteem for particularly experienced camp administrators. The term is still found in use in Russia today (for example, President Vladimir Putin has been referred to in the Russian media as a chekist due to his career in the KGB and as head of the KGB's successor, FSB).",
"title": "Name"
},
{
"paragraph_id": 7,
"text": "The chekists commonly dressed in black leather, including long flowing coats, reportedly after being issued such distinctive coats early in their existence. Western communists adopted this clothing fashion. The Chekists also often carried with them Greek-style worry beads made of amber, which had become \"fashionable among high officials during the time of the 'cleansing'\".",
"title": "Name"
},
{
"paragraph_id": 8,
"text": "In 1921, the Troops for the Internal Defense of the Republic (a branch of the Cheka) numbered at least 200,000. These troops policed labor camps, ran the Gulag system, conducted requisitions of food, and subjected political opponents to secret arrest, detention, torture and summary execution. They also put down rebellions and riots by workers or peasants, and mutinies in the desertion-plagued Red Army.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "After 1922 Cheka groups underwent the first of a series of reorganizations; however the theme of a government dominated by \"the organs\" persisted indefinitely afterward, and Soviet citizens continued to refer to members of the various organs as Chekists.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "In the first month and half after the October Revolution (1917), the duty of \"extinguishing the resistance of exploiters\" was assigned to the Petrograd Military Revolutionary Committee (or PVRK). It represented a temporary body working under directives of the Council of People's Commissars (Sovnarkom) and Central Committee of RDSRP(b). The VRK created new bodies of government, organized food delivery to cities and the Army, requisitioned products from bourgeoisie, and sent its emissaries and agitators into provinces. One of its most important functions was the security of revolutionary order, and the fight against counterrevolutionary activity (see: Anti-Soviet agitation).",
"title": "History"
},
{
"paragraph_id": 11,
"text": "On December 1, 1917, the All-Russian Central Executive Committee (VTsIK or TsIK) reviewed a proposed reorganization of the VRK, and possible replacement of it. On December 5, the Petrograd VRK published an announcement of dissolution and transferred its functions to the department of TsIK for the fight against \"counterrevolutionaries\". On December 6, the Council of People's Commissars (Sovnarkom) strategized how to persuade government workers to strike across Russia. They decided that a special commission was needed to implement the \"most energetically revolutionary\" measures. Felix Dzerzhinsky (the Iron Felix) was appointed as Director and invited the participation of the following individuals: V. K. Averin, V.V Yakovlev, D. G. Yevseyev, N. A. Zhydelev, I. K. Ksenofontov, G. K. Ordjonikidze, Ya. Kh. Peters, K. A. Peterson, V. A. Trifonov.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "On December 7, 1917, all invited except Zhydelev and Vasilevsky gathered in the Smolny Institute to discuss the competence and structure of the commission to combat counterrevolution and sabotage. The obligations of the commission were: \"to liquidate to the root all of the counterrevolutionary and sabotage activities and all attempts to them in all of Russia, to hand over counter-revolutionaries and saboteurs to the revolutionary tribunals, develop measures to combat them and relentlessly apply them in real-world applications. The commission should only conduct a preliminary investigation\". The commission should also observe the press and counterrevolutionary parties, sabotaging officials and other criminals.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "Three sections were created: informational, organizational, and a unit to combat counter-revolution and sabotage. Upon the end of the meeting, Dzerzhinsky reported to the Sovnarkom with the requested information. The commission was allowed to apply such measures of repression as 'confiscation, deprivation of ration cards, publication of lists of enemies of the people etc.'\". That day, Sovnarkom officially confirmed the creation of VCheKa. The commission was created not under the VTsIK as was previously anticipated, but rather under the Council of the People's Commissars.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "On December 8, 1917, some of the original members of the VCheka were replaced. Averin, Ordzhonikidze, and Trifonov were replaced by V. V. Fomin, S. E. Shchukin, Ilyin, and Chernov. On the meeting of December 8, the presidium of VChK was elected of five members, and chaired by Dzerzhinsky. The issues of \"speculation\" or profiteering, such as by black market grain sellers and \"corruption\" was raised at the same meeting, which was assigned to Peters to address and report with results to one of the next meetings of the commission. A circular, published on December 28 [O.S. December 15] 1917, gave the address of VCheka's first headquarters as \"Petrograd, Gorokhovaya 2, 4th floor\". On December 11, Fomin was ordered to organize a section to suppress \"speculation.\" And in the same day, VCheKa offered Shchukin to conduct arrests of counterfeiters.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "In January 1918, a subsection of the anti-counterrevolutionary effort was created to police bank officials. The structure of VCheKa was changing repeatedly. By March 1918, when the organization came to Moscow, it contained the following sections: against counterrevolution, speculation, non-residents, and information gathering. By the end of 1918–1919, some new units were created: secretly operative, investigatory, of transportation, military (special), operative, and instructional. By 1921, it changed once again, forming the following sections: directory of affairs, administrative-organizational, secretly operative, economical, and foreign affairs.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "In the first months of its existence, VCheKa consisted of only 40 officials. It commanded a team of soldiers, the Sveaborgesky regiment, as well as a group of Red Guardsmen. On January 14, 1918, Sovnarkom ordered Dzerzhinsky to organize teams of \"energetic and ideological\" sailors to combat speculation. By the spring of 1918, the commission had several teams: in addition to the Sveaborge team, it had an intelligence team, a team of sailors, and a strike team. Through the winter of 1917–1918, all activities of VCheKa were centralized mainly in the city of Petrograd. It was one of several other commissions in the country which fought against counterrevolution, speculation, banditry, and other activities perceived as crimes. Other organizations included: the Bureau of Military Commissars, and an Army-Navy investigatory commission to attack the counterrevolutionary element in the Red Army, plus the Central Requisite and Unloading Commission to fight speculation. The investigation of counterrevolutionary or major criminal offenses was conducted by the Investigatory Commission of Revtribunal. The functions of VCheKa were closely intertwined with the Commission of V. D. Bonch-Bruyevich, which beside the fight against wine pogroms was engaged in the investigation of most major political offenses (see: Bonch-Bruyevich Commission).",
"title": "History"
},
{
"paragraph_id": 17,
"text": "All results of its activities, VCheKa had either to transfer to the Investigatory Commission of Revtribunal, or to dismiss. The control of the commission's activity was provided by the People's Commissariat for Justice (Narkomjust, at that time headed by Isaac Steinberg) and Internal Affairs (NKVD, at that time headed by Grigory Petrovsky). Although the VCheKa was officially an independent organization from the NKVD, its chief members such as Dzerzhinsky, Latsis, Unszlicht, and Uritsky (all main chekists), since November 1917 composed the collegiate of NKVD headed by Petrovsky. In November 1918, Petrovsky was appointed as head of the All-Ukrainian Central Military Revolutionary Committee during VCheKa's expansion to provinces and front-lines. At the time of political competition between Bolsheviks and SRs (January 1918), Left SRs attempted to curb the rights of VCheKa and establish through the Narkomiust their control over its work. Having failed in attempts to subordinate the VCheKa to Narkomiust, the Left SRs tried to gain control of the Extraordinary Commission in a different way: they requested that the Central Committee of the party be granted the right to directly enter their representatives into the VCheKa. Sovnarkom recognized the desirability of including five representatives of the Left Socialist-Revolutionary faction of VTsIK. Left SRs were granted the post of a companion (deputy) chairman of VCheKa. However, Sovnarkom, in which the majority belonged to the representatives of RSDLP(b) retained the right to approve members of the collegium of the VCheKa.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "Originally, members of the Cheka were exclusively Bolshevik; however, in January 1918, Left SRs also joined the organization. The Left SRs were expelled or arrested later in 1918, following the attempted assassination of Lenin by an SR, Fanni Kaplan.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "By the end of January 1918, the Investigatory Commission of Petrograd Soviet (probably same as of Revtribunal) petitioned Sovnarkom to delineate the role of detection and judicial-investigatory organs. It offered to leave, for the VCheKa and the Commission of Bonch-Bruyevich, only the functions of detection and suppression, while investigative functions entirely transferred to it. The Investigatory Commission prevailed. On January 31, 1918, Sovnarkom ordered to relieve VCheKa of the investigative functions, leaving for the commission only the functions of detection, suppression, and prevention of anti revolutionary crimes. At the meeting of the Council of People's Commissars on January 31, 1918, a merger of VCheKa and the Commission of Bonch-Bruyevich was proposed. The existence of both commissions, VCheKa of Sovnarkom and the Commission of Bonch-Bruyevich of VTsIK, with almost the same functions and equal rights, became impractical. A decision followed two weeks later.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "On February 23, 1918, VCheKa sent a radio telegram to all Soviets with a petition to immediately organize emergency commissions to combat counter-revolution, sabotage and speculation, if such commissions had not been yet organized. February 1918 saw the creation of local Extraordinary Commissions. One of the first founded was the Moscow Cheka. Sections and commissariats to combat counterrevolution were established in other cities. The Extraordinary Commissions arose, usually in the areas during the moments of the greatest aggravation of political situation. On February 25, 1918, as the counterrevolutionary organization Union of Front-liners was making advances, the executive committee of the Saratov Soviet formed a counter-revolutionary section. On March 7, 1918, because of the move from Petrograd to Moscow, the Petrograd Cheka was created. On March 9, a section for combating counterrevolution was created under the Omsk Soviet. Extraordinary commissions were also created in Penza, Perm, Novgorod, Cherepovets, Rostov, Taganrog. On March 18, VCheKa adopted a resolution, The Work of VCheKa on the All-Russian Scale, foreseeing the formation everywhere of Extraordinary Commissions after the same model, and sent a letter that called for the widespread establishment of the Cheka in combating counterrevolution, speculation, and sabotage. Establishment of provincial Extraordinary Commissions was largely completed by August 1918. In the Soviet Republic, there were 38 gubernatorial Chekas (Gubcheks) by this time.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "On June 12, 1918, the All-Russian Conference of Cheka adopted the Basic Provisions on the Organization of Extraordinary Commissions. They set out to form Extraordinary Commissions not only at Oblast and Guberniya levels, but also at the large Uyezd Soviets. In August 1918, in the Soviet Republic had accounted for some 75 Uyezd-level Extraordinary Commissions. By the end of the year, 365 Uyezd-level Chekas were established.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "In 1918, the All-Russia Extraordinary Commission and the Soviets managed to establish a local Cheka apparatus. It included Oblast, Guberniya, Raion, Uyezd, and Volost Chekas, with Raion and Volost Extraordinary Commissioners. In addition, border security Chekas were included in the system of local Cheka bodies.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the autumn of 1918, as consolidation of the political situation of the republic continued, a move toward elimination of Uyezd-, Raion-, and Volost-level Chekas, as well as the institution of Extraordinary Commissions was considered. On January 20, 1919, VTsIK adopted a resolution prepared by VCheKa, On the abolition of Uyezd Extraordinary Commissions. On January 16 the presidium of VCheKa approved the draft on the establishment of the Politburo at Uyezd militsiya. This decision was approved by the Conference of the Extraordinary Commission IV, held in early February 1920.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "On August 3, a VCheKa section for combating counterrevolution, speculation and sabotage on railways was created. On August 7, 1918, Sovnarkom adopted a decree on the organization of the railway section at VCheKa. Combating counterrevolution, speculation, and crimes on railroads was passed under the jurisdiction of the railway section of VCheKa and local Cheka. In August 1918, railway sections were formed under the Gubcheks. Formally, they were part of the non-resident sections, but in fact constituted a separate division, largely autonomous in their activities. The gubernatorial and oblast-type Chekas retained in relation to the transportation sections only control and investigative functions.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "The beginning of a systematic work of organs of VCheKa in RKKA refers to July 1918, the period of extreme tension of the civil war and class struggle in the country. On July 16, 1918, the Council of People's Commissars formed the Extraordinary Commission for combating counterrevolution at the Czechoslovak (Eastern) Front, led by M. I. Latsis. In the fall of 1918, Extraordinary Commissions to combat counterrevolution on the Southern (Ukraine) Front were formed. In late November, the Second All-Russian Conference of the Extraordinary Commissions accepted a decision after a report from I. N. Polukarov to establish at all frontlines, and army sections of the Cheka and granted them the right to appoint their commissioners in military units. On December 9, 1918, the collegiate (or presidium) of VCheKa had decided to form a military section, headed by M. S. Kedrov, to combat counterrevolution in the Army. In early 1919, the military control and the military section of VCheKa were merged into one body, the Special Section of the Republic, with Kedrov as head. On January 1, he issued an order to establish the Special Section. The order instructed agencies everywhere to unite the Military control and the military sections of Chekas and to form special sections of frontlines, armies, military districts, and guberniyas.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "In November 1920 the Soviet of Labor and Defense created a Special Section of VCheKa for the security of the state border. On February 6, 1922, after the Ninth All-Russian Soviet Congress, the Cheka was dissolved by VTsIK, \"with expressions of gratitude for heroic work.\" It was replaced by the State Political Administration or GPU, a section of the NKVD of the Russian Soviet Federative Socialist Republic (RSFSR). Dzerzhinsky remained as chief of the new organization.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "As its name implied, the Extraordinary Commission had virtually unlimited powers and could interpret them in any way it wished. No standard procedures were ever set up, except that the commission was supposed to send the arrested to the Military-Revolutionary tribunals if outside of a war zone. This left an opportunity for a wide range of interpretations, as the whole country was in total chaos. At the direction of Lenin, the Cheka performed mass arrests, imprisonments, and executions of \"enemies of the people\". In this, the Cheka said that they targeted \"class enemies\" such as the bourgeoisie, and members of the clergy.",
"title": "Operations"
},
{
"paragraph_id": 28,
"text": "Within a month, the Cheka had extended its repression to all political opponents of the communist government, including anarchists and others on the left. On April 11/12, 1918, some 26 anarchist political centres in Moscow were attacked. Forty anarchists were killed by Cheka forces, and about 500 were arrested and jailed after a pitched battle took place between the two groups. In response to the anarchists' resistance, the Cheka orchestrated a massive retaliatory campaign of repression, executions, and arrests against all opponents of the Bolshevik government, in what came to be known as \"Red Terror\". The Red Terror, implemented by Dzerzhinsky on September 5, 1918, was vividly described by the Red Army journal Krasnaya Gazeta:",
"title": "Operations"
},
{
"paragraph_id": 29,
"text": "Without mercy, without sparing, we will kill our enemies in scores of hundreds. Let them be thousands, let them drown themselves in their own blood. For the blood of Lenin and Uritsky … let there be floods of blood of the bourgeoisie – more blood, as much as possible...\"",
"title": "Operations"
},
{
"paragraph_id": 30,
"text": "An early Bolshevik, Victor Serge described in his book Memoirs of a Revolutionary:",
"title": "Operations"
},
{
"paragraph_id": 31,
"text": "Since the first massacres of Red prisoners by the Whites, the murders of Volodarsky and Uritsky and the attempt against Lenin (in the summer of 1918), the custom of arresting and, often, executing hostages had become generalized and legal. Already the Cheka, which made mass arrests of suspects, was tending to settle their fate independently, under formal control of the Party, but in reality without anybody's knowledge. The Party endeavoured to head it with incorruptible men like the former convict Dzerzhinsky, a sincere idealist, ruthless but chivalrous, with the emaciated profile of an Inquisitor: tall forehead, bony nose, untidy goatee, and an expression of weariness and austerity. But the Party had few men of this stamp and many Chekas. I believe that the formation of the Chekas was one of the gravest and most impermissible errors that the Bolshevik leaders committed in 1918 when plots, blockades, and interventions made them lose their heads. All evidence indicates that revolutionary tribunals, functioning in the light of day and admitting the right of defense, would have attained the same efficiency with far less abuse and depravity. Was it necessary to revert to the procedures of the Inquisition?\"",
"title": "Operations"
},
{
"paragraph_id": 32,
"text": "The Cheka was also used against Nestor Makhno's Revolutionary Insurgent Army of Ukraine. After the Insurgent Army had served its purpose in aiding the Red Army to stop the Whites under Denikin, the Soviet communist government decided to eliminate the anarchist forces. In May 1919, two Cheka agents sent to assassinate Makhno were caught and executed.",
"title": "Operations"
},
{
"paragraph_id": 33,
"text": "Many victims of Cheka repression were \"bourgeois hostages\" rounded up and held in readiness for summary execution in reprisal for any alleged counter-revolutionary act. Wholesale, indiscriminate arrests became an integral part of the system. The Cheka used trucks disguised as delivery trucks, called \"Black Marias\", for the secret arrest and transport of prisoners.",
"title": "Operations"
},
{
"paragraph_id": 34,
"text": "It was during the Red Terror that the Cheka, hoping to avoid the bloody aftermath of having half-dead victims writhing on the floor, developed a technique for execution known later by the German words \"Nackenschuss\" or \"Genickschuss\", a shot to the nape of the neck, which caused minimal blood loss and instant death. The victim's head was bent forward, and the executioner fired slightly downward at point-blank range. This had become the standard method used later by the NKVD to liquidate Joseph Stalin's purge victims and others.",
"title": "Operations"
},
{
"paragraph_id": 35,
"text": "It is believed that there were more than three million deserters from the Red Army in 1919 and 1920 . Approximately 500,000 deserters were arrested in 1919 and close to 800,000 in 1920, by troops of the 'Special Punitive Department' of the Cheka, created to punish desertions. These troops were used to forcibly repatriate deserters, taking and shooting hostages to force compliance or to set an example.",
"title": "Operations"
},
{
"paragraph_id": 36,
"text": "In September 1918, according to The Black Book of Communism, in only twelve provinces of Russia, 48,735 deserters and 7,325 \"bandits\" were arrested, 1,826 were killed and 2,230 were executed. The exact identity of these individuals is confused by the fact that the Soviet Bolshevik government used the term 'bandit' to cover ordinary criminals as well as armed and unarmed political opponents, such as the anarchists.",
"title": "Operations"
},
{
"paragraph_id": 37,
"text": "Estimates on Cheka executions vary widely. The lowest figures (disputed below) are provided by Dzerzhinsky's lieutenant Martyn Latsis, limited to RSFSR over the period 1918–1920:",
"title": "Repression"
},
{
"paragraph_id": 38,
"text": "Experts generally agree these semi-official figures are vastly understated. Pioneering historian of the Red Terror Sergei Melgunov claims that this was done deliberately in an attempt to demonstrate the government's humanity. For example, he refutes the claim made by Latsis that only 22 executions were carried out in the first six months of the Cheka's existence by providing evidence that the true number was 884 executions. W. H. Chamberlin claims, \"It is simply impossible to believe that the Cheka only put to death 12,733 people in all of Russia up to the end of the civil war.\" Donald Rayfield concurs, noting that, \"Plausible evidence reveals that the actual numbers … vastly exceeded the official figures.\" Chamberlin provides the \"reasonable and probably moderate\" estimate of 50,000, while others provide estimates ranging up to 500,000. Several scholars put the number of executions at about 250,000. Some believe it is possible more people were murdered by the Cheka than died in battle. Historian James Ryan gives a modest estimate of 28,000 executions per year from December 1917 to February 1922.",
"title": "Repression"
},
{
"paragraph_id": 39,
"text": "Lenin himself seemed unfazed by the killings. On 12 January 1920, while addressing trade union leaders, he said: \"We did not hesitate to shoot thousands of people, and we shall not hesitate, and we shall save the country.\" On 14 May 1921, the Politburo, chaired by Lenin, passed a motion \"broadening the rights of the [Cheka] in relation to the use of the [death penalty].\"",
"title": "Repression"
},
{
"paragraph_id": 40,
"text": "There is no consensus among the Western historians on the number of deaths from the Red Terror. One source gives estimates of 28,000 executions per year from December 1917 to February 1922. Estimates for the number of people shot during the initial period of the Red Terror are at least 10,000. Estimates for the whole period go for a low of 50,000 to highs of 140,000 and 200,000 executed. Most estimations for the number of executions in total put the number at about 100,000.",
"title": "Repression"
},
{
"paragraph_id": 41,
"text": "According to Vadim Erlikhman's investigation, the number of the Red Terror's victims is at least 1,200,000 people. According to Robert Conquest, a total of 140,000 people were shot in 1917–1922. Candidate of Historical Sciences Nikolay Zayats states that the number of people shot by the Cheka in 1918–1922 is about 37,300 people, shot in 1918–1921 by the verdicts of the tribunals – 14,200, i.e. about 50,000–55,000 people in total, although executions and atrocities were not limited to the Cheka, having been organized by the Red Army as well.",
"title": "Repression"
},
{
"paragraph_id": 42,
"text": "According to anti-Bolshevik Socialist Revolutionary Sergei Melgunov (1879–1956), at the end of 1919, the Special Investigation Commission to investigate the atrocities of the Bolsheviks estimated the number of deaths at 1,766,188 people in 1918–1919 only.",
"title": "Repression"
},
{
"paragraph_id": 43,
"text": "The Cheka engaged in the widespread practice of torture. Depending on Cheka committees in various cities, the methods included: being skinned alive, scalped, \"crowned\" with barbed wire, impaled, crucified, hanged, stoned to death, tied to planks and pushed slowly into furnaces or tanks of boiling water, or rolled around naked in internally nail-studded barrels. Chekists reportedly poured water on naked prisoners in the winter-bound streets until they became living ice statues. Others reportedly\"Reportedly\"? By whom was it reportedly? beheaded their victims by twisting their necks until their heads could be torn off. The Cheka detachments stationed in Kiev reportedly would attach an iron tube to the torso of a bound victim and insert a rat in the tube closed off with wire netting, while the tube was held over a flame until the rat began gnawing through the victim's guts in an effort to escape.",
"title": "Atrocities"
},
{
"paragraph_id": 44,
"text": "Women and children were also victims of Cheka terror. Women would sometimes be tortured and raped before being shot. Children between the ages of 8 and 13 were imprisoned and occasionally executed.",
"title": "Atrocities"
},
{
"paragraph_id": 45,
"text": "All of these atrocities were published on numerous occasions in Pravda and Izvestiya: January 26, 1919 Izvestiya #18 article Is it really a medieval imprisonment? («Неужели средневековый застенок?»); February 22, 1919 Pravda #12 publishes details of the Vladimir Cheka's tortures, September 21, 1922 Socialist Herald publishes details of series of tortures conducted by the Stavropol Cheka (hot basement, cold basement, skull measuring, etc.).",
"title": "Atrocities"
},
{
"paragraph_id": 46,
"text": "The Chekists were also supplemented by the militarized Units of Special Purpose (the Party's Spetsnaz or ЧОН).",
"title": "Atrocities"
},
{
"paragraph_id": 47,
"text": "Cheka was actively and openly utilizing kidnapping methods. With kidnapping methods, Cheka was able to extinguish numerous cases of discontent especially among the rural population. Among the notorious ones was the Tambov rebellion.",
"title": "Atrocities"
},
{
"paragraph_id": 48,
"text": "Villages were bombarded to complete annihilation, as in the case of Tretyaki, Novokhopersk uyezd, Voronezh Governorate.",
"title": "Atrocities"
},
{
"paragraph_id": 49,
"text": "As a result of this relentless violence, more than a few Chekists ended up with psychopathic disorders, which Nikolai Bukharin said were \"an occupational hazard of the Chekist profession.\" Many hardened themselves to the executions by heavy drinking and drug use. Some developed a gangster-like slang for the verb to kill in an attempt to distance themselves from the killings, such as 'shooting partridges', or 'sealing' a victim, or giving him a natsokal (onomatopoeia of the trigger action).",
"title": "Atrocities"
},
{
"paragraph_id": 50,
"text": "On November 30, 1992, by the initiative of the President of the Russian Federation the Constitutional Court of the Russian Federation recognized the Red Terror as unlawful, which in turn led to the suspension of Communist Party of the RSFSR.",
"title": "Atrocities"
},
{
"paragraph_id": 51,
"text": "Cheka departments were organized not only in big cities and guberniya seats, but also in each uyezd, at any front-lines and military formations. Nothing is known on what resources they were created. Many who were hired to head those departments were so-called \"nestlings of Alexander Kerensky\".",
"title": "Regional Chekas"
},
{
"paragraph_id": 52,
"text": "Konstantin Preobrazhenskiy criticised the continuing celebration of the professional holiday of the old and the modern Russian security services on the anniversary of the creation of the Cheka, with the assent of the Presidents of Russia. (Vladimir Putin, former KGB officer, chose not to change the date to another): \"The successors of the KGB still haven't renounced anything; they even celebrate their professional holiday the same day, as during repression, on the 20th of December. It is as if the present intelligence and counterespionage services of Germany celebrated Gestapo Day. I can imagine how indignant our press would be!\"",
"title": "Legacy"
}
] | The All-Russian Extraordinary Commission, abbreviated as VChK, and commonly known as Cheka, was the first of a succession of Soviet secret-police organizations known for conducting the Red Terror. Established on December 5, 1917 by the Sovnarkom, it came under the leadership of Bolshevik revolutionary Felix Dzerzhinsky. By late 1918, hundreds of Cheka committees had sprung up in the Russian SFSR at all levels. Ostensibly set up to protect the revolution from reactionary forces, i.e., "class enemies" such as the bourgeoisie and members of the clergy, it soon became a tool of repression against all political opponents of the communist regime. At the direction of Vladimir Lenin, the Cheka performed mass arrests, imprisonments, torture, and executions without trial. In 1921, the Troops for the Internal Defense of the Republic numbered at least 200,000. They policed labor camps, ran the Gulag system, conducted requisitions of food, and put down rebellions and riots by workers and peasants and mutinies in the Red Army. The organization was dissolved in 1922 and succeeded by the State Political Directorate or GPU. | 2001-10-11T14:08:02Z | 2023-12-24T09:04:03Z | [
"Template:Infobox government agency",
"Template:Clear",
"Template:Clarify",
"Template:Repression in the Soviet Union",
"Template:Nowrap",
"Template:Commons category-inline",
"Template:Russian Revolution 1917",
"Template:Blockquote",
"Template:Page needed",
"Template:Div col end",
"Template:Secret police of Communist Europe",
"Template:Authority control",
"Template:See also",
"Template:-\"",
"Template:Better source needed",
"Template:Div col",
"Template:Cite web",
"Template:Cite journal",
"Template:Lang",
"Template:ISBN",
"Template:Refend",
"Template:Request quotation",
"Template:OldStyleDate",
"Template:Sfnp",
"Template:Reflist",
"Template:Cite book",
"Template:Cite news",
"Template:Webarchive",
"Template:Soviet Union topics",
"Template:Chronology of Soviet secret police agencies",
"Template:More citations needed section",
"Template:Short description",
"Template:Weasel inline",
"Template:By whom",
"Template:Full citation needed",
"Template:In lang",
"Template:Refbegin",
"Template:Lang-rus",
"Template:Citation needed"
] | https://en.wikipedia.org/wiki/Cheka |
6,753 | Clitic | In morphology and syntax, a clitic (/ˈklɪtɪk/ KLIT-ik, backformed from Greek ἐγκλιτικός enklitikós "leaning" or "enclitic") is a morpheme that has syntactic characteristics of a word, but depends phonologically on another word or phrase. In this sense, it is syntactically independent but phonologically dependent—always attached to a host. A clitic is pronounced like an affix, but plays a syntactic role at the phrase level. In other words, clitics have the form of affixes, but the distribution of function words.
Clitics can belong to any grammatical category, although they are commonly pronouns, determiners, or adpositions. Note that orthography is not always a good guide for distinguishing clitics from affixes: clitics may be written as separate words, but sometimes they are joined to the word they depend on (like the Latin clitic -que, meaning "and") or separated by special characters such as hyphens or apostrophes (like the English clitic 's in "it's" for "it has" or "it is").
Clitics fall into various categories depending on their position in relation to the word they connect to.
A proclitic appears before its host.
An enclitic appears after its host.
Some authors postulate endoclitics, which split a stem and are inserted between the two elements. For example, they have been claimed to occur between the elements of bipartite verbs (equivalent to English verbs such as take part) in the Udi language. Endoclitics have also been claimed for Pashto and Degema. However, other authors treat such forms as a sequence of clitics docked to the stem.
One distinction drawn by some scholars divides the broad term "clitics" into two categories, simple clitics and special clitics. This distinction is, however, disputed.
Simple clitics are free morphemes: they can stand alone in a phrase or sentence. They are unaccented and thus phonologically dependent upon a nearby word. They derive meaning only from that "host".
Special clitics are morphemes that are bound to the word upon which they depend: they exist as a part of their host. That form, which is unaccented, represents a variant of a free form that carries stress. Both variants carry similar meaning and phonological makeup, but the special clitic is bound to a host word and is unaccented.
Some clitics can be understood as elements undergoing a historical process of grammaticalization:
lexical item → clitic → affix
According to this model from Judith Klavans, an autonomous lexical item in a particular context loses the properties of a fully independent word over time and acquires the properties of a morphological affix (prefix, suffix, infix, etc.). At any intermediate stage of this evolutionary process, the element in question can be described as a "clitic". As a result, this term ends up being applied to a highly heterogeneous class of elements, presenting different combinations of word-like and affix-like properties.
One characteristic shared by many clitics, shared with affixes, is a lack of prosodic independence. A clitic attaches to an adjacent word, known as its host. Orthographic conventions treat clitics in different ways: Some are written as separate words, some are written as one word with their hosts, and some are attached to their hosts, but set off by punctuation (a hyphen or an apostrophe, for example).
Although the term "clitic" can be used descriptively to refer to any element whose grammatical status is somewhere in between a typical word and a typical affix, linguists have proposed various definitions of "clitic" as a technical term. One common approach is to treat clitics as words that are prosodically deficient: that, like affixes, they cannot appear without a host, and can only form an accentual unit in combination with their host. The term postlexical clitic is sometimes used for this sense of the term.
Given this basic definition, further criteria are needed to establish a dividing line between clitics and affixes. There is no natural, clear-cut boundary between the two categories (since from a diachronic point of view, a given form can move gradually from one to the other by morphologization). However, by identifying clusters of observable properties that are associated with core examples of clitics on the one hand, and core examples of affixes on the other, one can pick out a battery of tests that provide an empirical foundation for a clitic-affix distinction.
An affix syntactically and phonologically attaches to a base morpheme of a limited part of speech, such as a verb, to form a new word. A clitic syntactically functions above the word level, on the phrase or clause level, and attaches only phonetically to the first, last, or only word in the phrase or clause, whichever part of speech the word belongs to. The results of applying these criteria sometimes reveal that elements that have traditionally been called "clitics" actually have the status of affixes (e.g., the Romance pronominal clitics discussed below).
Zwicky and Pullum postulated five characteristics that distinguish clitics from affixes:
An example of differing analyses by different linguists is the discussion of the possessive marker ('s) in English. Some linguists treat it as an affix, while others treat it as a clitic.
Similar to the discussion above, clitics must be distinguishable from words. Linguists have proposed a number of tests to differentiate between the two categories. Some tests, specifically, are based upon the understanding that when comparing the two, clitics resemble affixes, while words resemble syntactic phrases. Clitics and words thus each resemble a different category, in the sense that each shares certain properties with it. Six such tests are described below. These are not the only ways to differentiate between words and clitics.
Clitics do not always appear next to the word or phrase that they are associated with grammatically. They may be subject to global word order constraints that act on the entire sentence. Many Indo-European languages, for example, obey Wackernagel's law (named after Jacob Wackernagel), which requires sentential clitics to appear in "second position", after the first syntactic phrase or the first stressed word in a clause:
English enclitics include the contracted versions of auxiliary verbs, as in I'm and we've. Some also regard the possessive marker, as in The Queen of England's crown, as an enclitic rather than a (phrasal) genitival inflection.
Some consider the infinitive marker to and the English articles a, an, the to be proclitics.
The negative marker -n't as in couldn't etc. is typically considered a clitic that developed from the lexical item not. Linguists Arnold Zwicky and Geoffrey Pullum argue, however, that the form has the properties of an affix rather than a syntactically independent clitic.
In Cornish, the clitics ma / na are used after a noun and definite article to express "this" / "that" (singular) and "these" / "those" (plural). For example:
Irish Gaelic uses seo / sin as clitics in a similar way, also to express "this" / "that" and "these" / "those". For example:
In Romance languages, some have treated the object personal pronoun forms as clitics, though they only attach to the verb they are the object of and so are affixes by the definition used here. There is no general agreement on the issue. For the Spanish object pronouns, for example:
Portuguese allows object suffixes before the conditional and future suffixes of the verbs:
Colloquial Portuguese allows ser to be conjugated as a verbal clitic adverbial adjunct to emphasize the importance of the phrase compared to its context, or with the meaning of "really" or "in truth":
Note that this clitic form exists only for the verb ser and is restricted to third-person singular conjugations. It is not used as a verb in the grammar of the sentence but introduces prepositional phrases and adds emphasis. It does not need to agree with the tense of the main verb, as in the second example, and can usually be removed from the sentence without affecting the simple meaning.
In the Indo-European languages, some clitics can be traced back to Proto-Indo-European: for example, *-kʷe is the original form of Sanskrit च (-ca), Greek τε (-te), and Latin -que.
Serbo-Croatian: the reflexive pronoun forms si and se, li (yes–no question), unstressed present and aorist tense forms of biti ("to be"; sam, si, je, smo, ste, su; and bih, bi, bi, bismo, biste, bi, for the respective tense), unstressed personal pronouns in genitive (me, te, ga, je, nas, vas, ih), dative (mi, ti, mu, joj, nam, vam, im) and accusative (me, te, ga (nj), je (ju), nas, vas, ih), and unstressed present tense of htjeti ("want/will"; ću, ćeš, će, ćemo, ćete, će)
These clitics follow the first stressed word in the sentence or clause in most cases, which may have been inherited from Proto-Indo-European (see Wackernagel's Law), even though many of the modern clitics became cliticised much more recently in the language (e.g. auxiliary verbs or the accusative forms of pronouns). In subordinate clauses and questions, they follow the connector and/or the question word respectively.
Examples (clitics – sam "I am", biste "you would (pl.)", mi "to me", vam "to you (pl.)", ih "them"):
In certain rural dialects this rule is (or was until recently) very strict, whereas elsewhere various exceptions occur. These include phrases containing conjunctions (e. g. Ivan i Ana "Ivan and Ana"), nouns with a genitival attribute (e. g. vrh brda "the top of the hill"), proper names and titles and the like (e. g. (gospođa) Ivana Marić "(Mrs) Ivana Marić", grad Zagreb "the city (of) Zagreb"), and in many local varieties clitics are hardly ever inserted into any phrases (e. g. moj najbolji prijatelj "my best friend", sutra ujutro "tomorrow morning"). In cases like these, clitics normally follow the initial phrase, although some Standard grammar handbooks recommend that they should be placed immediately after the verb (many native speakers find this unnatural).
Examples:
Clitics are however never inserted after the negative particle ne, which always precedes the verb in Serbo-Croatian, or after prefixes (earlier preverbs), and the interrogative particle li always immediately follows the verb. Colloquial interrogative particles such as da li, dal, jel appear in sentence-initial position and are followed by clitics (if there are any).
Examples: | [
{
"paragraph_id": 0,
"text": "In morphology and syntax, a clitic (/ˈklɪtɪk/ KLIT-ik, backformed from Greek ἐγκλιτικός enklitikós \"leaning\" or \"enclitic\") is a morpheme that has syntactic characteristics of a word, but depends phonologically on another word or phrase. In this sense, it is syntactically independent but phonologically dependent—always attached to a host. A clitic is pronounced like an affix, but plays a syntactic role at the phrase level. In other words, clitics have the form of affixes, but the distribution of function words.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Clitics can belong to any grammatical category, although they are commonly pronouns, determiners, or adpositions. Note that orthography is not always a good guide for distinguishing clitics from affixes: clitics may be written as separate words, but sometimes they are joined to the word they depend on (like the Latin clitic -que, meaning \"and\") or separated by special characters such as hyphens or apostrophes (like the English clitic 's in \"it's\" for \"it has\" or \"it is\").",
"title": ""
},
{
"paragraph_id": 2,
"text": "Clitics fall into various categories depending on their position in relation to the word they connect to.",
"title": "Classification"
},
{
"paragraph_id": 3,
"text": "A proclitic appears before its host.",
"title": "Classification"
},
{
"paragraph_id": 4,
"text": "An enclitic appears after its host.",
"title": "Classification"
},
{
"paragraph_id": 5,
"text": "Some authors postulate endoclitics, which split a stem and are inserted between the two elements. For example, they have been claimed to occur between the elements of bipartite verbs (equivalent to English verbs such as take part) in the Udi language. Endoclitics have also been claimed for Pashto and Degema. However, other authors treat such forms as a sequence of clitics docked to the stem.",
"title": "Classification"
},
{
"paragraph_id": 6,
"text": "One distinction drawn by some scholars divides the broad term \"clitics\" into two categories, simple clitics and special clitics. This distinction is, however, disputed.",
"title": "Distinction"
},
{
"paragraph_id": 7,
"text": "Simple clitics are free morphemes: can stand alone in a phrase or sentence. They are unaccented and thus phonologically dependent upon a nearby word. They derive meaning only from that \"host\".",
"title": "Distinction"
},
{
"paragraph_id": 8,
"text": "Special clitics are morphemes that are bound to the word upon which they depend: they exist as a part of their host. That form, which is unaccented, represents a variant of a free form that carries stress. Both variants carry similar meaning and phonological makeup, but the special clitic is bound to a host word and is unaccented.",
"title": "Distinction"
},
{
"paragraph_id": 9,
"text": "Some clitics can be understood as elements undergoing a historical process of grammaticalization:",
"title": "Properties"
},
{
"paragraph_id": 10,
"text": "lexical item → clitic → affix",
"title": "Properties"
},
{
"paragraph_id": 11,
"text": "According to this model from Judith Klavans, an autonomous lexical item in a particular context loses the properties of a fully independent word over time and acquires the properties of a morphological affix (prefix, suffix, infix, etc.). At any intermediate stage of this evolutionary process, the element in question can be described as a \"clitic\". As a result, this term ends up being applied to a highly heterogeneous class of elements, presenting different combinations of word-like and affix-like properties.",
"title": "Properties"
},
{
"paragraph_id": 12,
"text": "One characteristic shared by many clitics, shared with affixes, is a lack of prosodic independence. A clitic attaches to an adjacent word, known as its host. Orthographic conventions treat clitics in different ways: Some are written as separate words, some are written as one word with their hosts, and some are attached to their hosts, but set off by punctuation (a hyphen or an apostrophe, for example).",
"title": "Properties"
},
{
"paragraph_id": 13,
"text": "Although the term \"clitic\" can be used descriptively to refer to any element whose grammatical status is somewhere in between a typical word and a typical affix, linguists have proposed various definitions of \"clitic\" as a technical term. One common approach is to treat clitics as words that are prosodically deficient: that, like affixes, they cannot appear without a host, and can only form an accentual unit in combination with their host. The term postlexical clitic is sometimes used for this sense of the term.",
"title": "Properties"
},
{
"paragraph_id": 14,
"text": "Given this basic definition, further criteria are needed to establish a dividing line between clitics and affixes. There is no natural, clear-cut boundary between the two categories (since from a diachronic point of view, a given form can move gradually from one to the other by morphologization). However, by identifying clusters of observable properties that are associated with core examples of clitics on the one hand, and core examples of affixes on the other, one can pick out a battery of tests that provide an empirical foundation for a clitic-affix distinction.",
"title": "Properties"
},
{
"paragraph_id": 15,
"text": "An affix syntactically and phonologically attaches to a base morpheme of a limited part of speech, such as a verb, to form a new word. A clitic syntactically functions above the word level, on the phrase or clause level, and attaches only phonetically to the first, last, or only word in the phrase or clause, whichever part of speech the word belongs to. The results of applying these criteria sometimes reveal that elements that have traditionally been called \"clitics\" actually have the status of affixes (e.g., the Romance pronominal clitics discussed below).",
"title": "Properties"
},
{
"paragraph_id": 16,
"text": "Zwicky and Pullum postulated five characteristics that distinguish clitics from affixes:",
"title": "Properties"
},
{
"paragraph_id": 17,
"text": "An example of differing analyses by different linguists is the discussion of the possessive marker ('s) in English. Some linguists treat it as an affix, while others treat it as a clitic.",
"title": "Properties"
},
{
"paragraph_id": 18,
"text": "Similar to the discussion above, clitics must be distinguishable from words. Linguists have proposed a number of tests to differentiate between the two categories. Some tests, specifically, are based upon the understanding that when comparing the two, clitics resemble affixes, while words resemble syntactic phrases. Clitics and words resemble different categories, in the sense that they share certain properties. Six such tests are described below. These are not the only ways to differentiate between words and clitics.",
"title": "Properties"
},
{
"paragraph_id": 19,
"text": "Clitics do not always appear next to the word or phrase that they are associated with grammatically. They may be subject to global word order constraints that act on the entire sentence. Many Indo-European languages, for example, obey Wackernagel's law (named after Jacob Wackernagel), which requires sentential clitics to appear in \"second position\", after the first syntactic phrase or the first stressed word in a clause:",
"title": "Properties"
},
{
"paragraph_id": 20,
"text": "English enclitics include the contracted versions of auxiliary verbs, as in I'm and we've. Some also regard the possessive marker, as in The Queen of England's crown as an enclitic, rather than a (phrasal) genitival inflection.",
"title": "Indo-European languages"
},
{
"paragraph_id": 21,
"text": "Some consider the infinitive marker to and the English articles a, an, the to be proclitics.",
"title": "Indo-European languages"
},
{
"paragraph_id": 22,
"text": "The negative marker -n't as in couldn't etc. is typically considered a clitic that developed from the lexical item not. Linguists Arnold Zwicky and Geoffrey Pullum argue, however, that the form has the properties of an affix rather than a syntactically independent clitic.",
"title": "Indo-European languages"
},
{
"paragraph_id": 23,
"text": "In Cornish, the clitics ma / na are used after a noun and definite article to express \"this\" / \"that\" (singular) and \"these\" / \"those\" (plural). For example:",
"title": "Indo-European languages"
},
{
"paragraph_id": 24,
"text": "Irish Gaelic uses seo / sin as clitics in a similar way, also to express \"this\" / \"that\" and \"these\" / \"those\". For example:",
"title": "Indo-European languages"
},
{
"paragraph_id": 25,
"text": "In Romance languages, some have treated the object personal pronoun forms as clitics, though they only attach to the verb they are the object of and so are affixes by the definition used here. There is no general agreement on the issue. For the Spanish object pronouns, for example:",
"title": "Indo-European languages"
},
{
"paragraph_id": 26,
"text": "Portuguese allows object suffixes before the conditional and future suffixes of the verbs:",
"title": "Indo-European languages"
},
{
"paragraph_id": 27,
"text": "Colloquial Portuguese allows ser to be conjugated as a verbal clitic adverbial adjunct to emphasize the importance of the phrase compared to its context, or with the meaning of \"really\" or \"in truth\":",
"title": "Indo-European languages"
},
{
"paragraph_id": 28,
"text": "Note that this clitic form is only for the verb ser and is restricted to only third-person singular conjugations. It is not used as a verb in the grammar of the sentence but introduces prepositional phrases and adds emphasis. It does not need to concord with the tense of the main verb, as in the second example, and can be usually removed from the sentence without affecting the simple meaning.",
"title": "Indo-European languages"
},
{
"paragraph_id": 29,
"text": "In the Indo-European languages, some clitics can be traced back to Proto-Indo-European: for example, *-kʷe is the original form of Sanskrit च (-ca), Greek τε (-te), and Latin -que.",
"title": "Indo-European languages"
},
{
"paragraph_id": 30,
"text": "Serbo-Croatian: the reflexive pronoun forms si and se, li (yes–no question), unstressed present and aorist tense forms of biti (\"to be\"; sam, si, je, smo, ste, su; and bih, bi, bi, bismo, biste, bi, for the respective tense), unstressed personal pronouns in genitive (me, te, ga, je, nas, vas, ih), dative (mi, ti, mu, joj, nam, vam, im) and accusative (me, te, ga (nj), je (ju), nas, vas, ih), and unstressed present tense of htjeti (\"want/will\"; ću, ćeš, će, ćemo, ćete, će)",
"title": "Indo-European languages"
},
{
"paragraph_id": 31,
"text": "These clitics follow the first stressed word in the sentence or clause in most cases, which may have been inherited from Proto-Indo-European (see Wackernagel's Law), even though many of the modern clitics became cliticised much more recently in the language (e.g. auxiliary verbs or the accusative forms of pronouns). In subordinate clauses and questions, they follow the connector and/or the question word respectively.",
"title": "Indo-European languages"
},
{
"paragraph_id": 32,
"text": "Examples (clitics – sam \"I am\", biste \"you would (pl.)\", mi \"to me\", vam \"to you (pl.)\", ih \"them\"):",
"title": "Indo-European languages"
},
{
"paragraph_id": 33,
"text": "In certain rural dialects this rule is (or was until recently) very strict, whereas elsewhere various exceptions occur. These include phrases containing conjunctions (e. g. Ivan i Ana \"Ivan and Ana\"), nouns with a genitival attribute (e. g. vrh brda \"the top of the hill\"), proper names and titles and the like (e. g. (gospođa) Ivana Marić \"(Mrs) Ivana Marić\", grad Zagreb \"the city (of) Zagreb\"), and in many local varieties clitics are hardly ever inserted into any phrases (e. g. moj najbolji prijatelj \"my best friend\", sutra ujutro \"tomorrow morning\"). In cases like these, clitics normally follow the initial phrase, although some Standard grammar handbooks recommend that they should be placed immediately after the verb (many native speakers find this unnatural).",
"title": "Indo-European languages"
},
{
"paragraph_id": 34,
"text": "Examples:",
"title": "Indo-European languages"
},
{
"paragraph_id": 35,
"text": "Clitics are however never inserted after the negative particle ne, which always precedes the verb in Serbo-Croatian, or after prefixes (earlier preverbs), and the interrogative particle li always immediately follows the verb. Colloquial interrogative particles such as da li, dal, jel appear in sentence-initial position and are followed by clitics (if there are any).",
"title": "Indo-European languages"
},
{
"paragraph_id": 36,
"text": "Examples:",
"title": "Indo-European languages"
}
] | In morphology and syntax, a clitic is a morpheme that has syntactic characteristics of a word, but depends phonologically on another word or phrase. In this sense, it is syntactically independent but phonologically dependent—always attached to a host. A clitic is pronounced like an affix, but plays a syntactic role at the phrase level. In other words, clitics have the form of affixes, but the distribution of function words. Clitics can belong to any grammatical category, although they are commonly pronouns, determiners, or adpositions. Note that orthography is not always a good guide for distinguishing clitics from affixes: clitics may be written as separate words, but sometimes they are joined to the word they depend on or separated by special characters such as hyphens or apostrophes. | 2001-10-11T17:15:53Z | 2023-12-31T04:43:24Z | [
"Template:Example needed",
"Template:In5",
"Template:Citation needed",
"Template:Short description",
"Template:IPA",
"Template:Respelling",
"Template:Lang",
"Template:Confusing",
"Template:PIE",
"Template:Reflist",
"Template:ISBN",
"Template:Cite journal",
"Template:IPAc-en",
"Template:Grc-transl",
"Template:'",
"Template:Contradiction inline",
"Template:Cite web",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Clitic |
6,759 | Context-free grammar | In formal language theory, a context-free grammar (CFG) is a formal grammar whose production rules can be applied to a nonterminal symbol regardless of its context. In particular, in a context-free grammar, each production rule is of the form
A → α
with A a single nonterminal symbol, and α a string of terminals and/or nonterminals (α can be empty). Regardless of which symbols surround it, the single nonterminal A on the left hand side can always be replaced by α on the right hand side. This distinguishes it from a context-sensitive grammar, which can have production rules in the form αAβ → αγβ with A a nonterminal symbol and α, β, and γ strings of terminal and/or nonterminal symbols.
A formal grammar is essentially a set of production rules that describe all possible strings in a given formal language. Production rules are simple replacements. For example, the rule
⟨Stmt⟩ → ⟨Id⟩ = ⟨Expr⟩;
replaces ⟨Stmt⟩ with ⟨Id⟩ = ⟨Expr⟩;. There can be multiple replacement rules for a given nonterminal symbol. The language generated by a grammar is the set of all strings of terminal symbols that can be derived, by repeated rule applications, from some particular nonterminal symbol ("start symbol"). Nonterminal symbols are used during the derivation process, but do not appear in its final result string.
Languages generated by context-free grammars are known as context-free languages (CFL). Different context-free grammars can generate the same context-free language. It is important to distinguish the properties of the language (intrinsic properties) from the properties of a particular grammar (extrinsic properties). The language equality question (do two given context-free grammars generate the same language?) is undecidable.
Context-free grammars arise in linguistics, where they are used to describe the structure of sentences and words in a natural language; they were invented by the linguist Noam Chomsky for this purpose. In computer science, by contrast, they came into wider and wider use as recursively defined concepts became more common. In an early application, grammars were used to describe the structure of programming languages. In a newer application, they are used in an essential part of the Extensible Markup Language (XML) called the document type definition.
In linguistics, some authors use the term phrase structure grammar to refer to context-free grammars, whereby phrase-structure grammars are distinct from dependency grammars. In computer science, a popular notation for context-free grammars is Backus–Naur form, or BNF.
Since at least the time of the ancient Indian scholar Pāṇini, linguists have described the grammars of languages in terms of their block structure, and described how sentences are recursively built up from smaller phrases, and eventually individual words or word elements. An essential property of these block structures is that logical units never overlap. For example, a sentence can be logically parenthesized (with the logical metasymbols [ ]) into nested, non-overlapping constituents.
A context-free grammar provides a simple and mathematically precise mechanism for describing the methods by which phrases in some natural language are built from smaller blocks, capturing the "block structure" of sentences in a natural way. Its simplicity makes the formalism amenable to rigorous mathematical study. Important features of natural language syntax such as agreement and reference are not part of the context-free grammar, but the basic recursive structure of sentences, the way in which clauses nest inside other clauses, and the way in which lists of adjectives and adverbs are swallowed by nouns and verbs, is described exactly.
Context-free grammars are a special form of Semi-Thue systems that in their general form date back to the work of Axel Thue.
The formalism of context-free grammars was developed in the mid-1950s by Noam Chomsky, together with their classification as a special type of formal grammar (which he called phrase-structure grammars). Some authors, however, reserve the term for more restricted grammars in the Chomsky hierarchy: context-sensitive grammars or context-free grammars. In a broader sense, phrase structure grammars are also known as constituency grammars. The defining trait of phrase structure grammars is thus their adherence to the constituency relation, as opposed to the dependency relation of dependency grammars. In Chomsky's generative grammar framework, the syntax of natural language was described by context-free rules combined with transformation rules.
Block structure was introduced into computer programming languages by the Algol project (1957–1960), which, as a consequence, also featured a context-free grammar to describe the resulting Algol syntax. This became a standard feature of computer languages, and the notation for grammars used in concrete descriptions of computer languages came to be known as Backus–Naur form, after two members of the Algol language design committee. The "block structure" aspect that context-free grammars capture is so fundamental to grammar that the terms syntax and grammar are often identified with context-free grammar rules, especially in computer science. Formal constraints not captured by the grammar are then considered to be part of the "semantics" of the language.
Context-free grammars are simple enough to allow the construction of efficient parsing algorithms that, for a given string, determine whether and how it can be generated from the grammar. An Earley parser is an example of such an algorithm, while the widely used LR and LL parsers are simpler algorithms that deal only with more restrictive subsets of context-free grammars.
A context-free grammar G is defined by the 4-tuple G = (V, Σ, R, S), where
V is a finite set of nonterminal symbols (variables),
Σ is a finite set of terminal symbols, disjoint from V,
R is a finite relation in V × (V ∪ Σ)*, whose members are the production rules of the grammar, and
S ∈ V is the start symbol.
A production rule in R is formalized mathematically as a pair (α, β) ∈ R, where α ∈ V is a nonterminal and β ∈ (V ∪ Σ)* is a string of variables and/or terminals; rather than using ordered pair notation, production rules are usually written using an arrow operator with α as its left hand side and β as its right hand side: α → β.
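To make the 4-tuple concrete, here is a minimal sketch of one possible encoding in Python; the representation (uppercase letters as nonterminals, rules as (α, β) pairs) is an illustrative assumption, not a standard convention:

    # A context-free grammar as a 4-tuple (V, Sigma, R, S).
    # Nonterminals are single uppercase letters; terminals are lowercase.
    V = {"S"}                          # nonterminal symbols
    Sigma = {"a", "b"}                 # terminal symbols
    R = {("S", "aSb"), ("S", "ab")}    # rules as (alpha, beta) pairs: S -> aSb | ab
    S = "S"                            # start symbol
    G = (V, Sigma, R, S)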
It is allowed for β to be the empty string, and in this case it is customary to denote it by ε. The form α → ε is called an ε-production.
It is common to list all right-hand sides for the same left-hand side on the same line, using | (the vertical bar) to separate them. Rules α → β₁ and α → β₂ can hence be written as α → β₁ | β₂. In this case, β₁ and β₂ are called the first and second alternative, respectively.
For any strings u, v ∈ (V ∪ Σ)*, we say u directly yields v, written as u ⇒ v, if there exists (α, β) ∈ R with α ∈ V and u₁, u₂ ∈ (V ∪ Σ)* such that u = u₁αu₂ and v = u₁βu₂. Thus, v is a result of applying the rule (α, β) to u.
For any strings u, v ∈ (V ∪ Σ)*, we say u yields v, or v is derived from u, if there is a positive integer k and strings u₁, …, uₖ ∈ (V ∪ Σ)* such that u = u₁ ⇒ u₂ ⇒ ⋯ ⇒ uₖ = v. This relation is denoted u ⇒* v, or u ⇒⇒ v in some textbooks. If k ≥ 2, the relation u ⇒⁺ v holds. In other words, (⇒*) and (⇒⁺) are the reflexive transitive closure (allowing a string to yield itself) and the transitive closure (requiring at least one step) of (⇒), respectively.
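Under the same illustrative encoding as above, the directly-yields relation can be computed by locating each occurrence of a rule's left-hand side in u and splicing in the right-hand side; the following sketch enumerates every v with u ⇒ v:

    def directly_yields(u, rules):
        """Yield every string v such that u => v in one rule application.

        `rules` is a set of (alpha, beta) pairs, where alpha is a single
        nonterminal symbol and beta is a string over V and Sigma.
        """
        for i, symbol in enumerate(u):
            for alpha, beta in rules:
                if symbol == alpha:
                    # u = u1 + alpha + u2 becomes v = u1 + beta + u2
                    yield u[:i] + beta + u[i + 1:]

    # With S -> aSb | ab, the sentential form "aSb" directly yields:
    print(sorted(directly_yields("aSb", {("S", "aSb"), ("S", "ab")})))
    # ['aaSbb', 'aabb']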
The language of a grammar G = (V, Σ, R, S) is the set
L(G) = {w ∈ Σ* : S ⇒* w}
of all terminal-symbol strings derivable from the start symbol.
A language L is said to be a context-free language (CFL) if there exists a CFG G such that L = L(G).
Non-deterministic pushdown automata recognize exactly the context-free languages.
The grammar G = ({S}, {a, b}, P, S), with productions
S → aSa
S → bSb
S → ε
is context-free. It is not proper since it includes an ε-production. A typical derivation in this grammar is
S ⇒ aSa ⇒ aaSaa ⇒ aabSbaa ⇒ aabbaa.
This makes it clear that L(G) = {wwᴿ : w ∈ {a, b}*}. The language is context-free; however, it can be proved that it is not regular.
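As a quick sanity check of this characterization, the following sketch (reusing the assumed rule encoding from above) enumerates every terminal string the grammar can derive with at most four terminal symbols; it prints exactly the strings wwᴿ with |w| ≤ 2:

    from collections import deque

    def generate(rules, start, max_terminals):
        """Enumerate all terminal strings derivable from `start`."""
        seen, out, queue = {start}, set(), deque([start])
        while queue:
            u = queue.popleft()
            if not any(c.isupper() for c in u):  # no nonterminals left
                out.add(u)
                continue
            for i, c in enumerate(u):
                for alpha, beta in rules:
                    if c == alpha:
                        v = u[:i] + beta + u[i + 1:]
                        # The terminal count never decreases, so prune on it.
                        if sum(ch.islower() for ch in v) <= max_terminals and v not in seen:
                            seen.add(v)
                            queue.append(v)
        return out

    rules = {("S", "aSa"), ("S", "bSb"), ("S", "")}
    print(sorted(generate(rules, "S", 4)))
    # ['', 'aa', 'aaaa', 'abba', 'baab', 'bb', 'bbbb']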
If the productions
S → a
S → b
are added, a context-free grammar for the set of all palindromes over the alphabet {a, b} is obtained.
The canonical example of a context-free grammar is parenthesis matching, which is representative of the general case. There are two terminal symbols "(" and ")" and one nonterminal symbol S. The production rules are
S → SS
S → (S)
S → ()
The first rule allows the S symbol to multiply; the second rule allows the S symbol to become enclosed by matching parentheses; and the third rule terminates the recursion.
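The language of this grammar is the set of all nonempty well-balanced parenthesis strings. Although that language is not regular, membership can be decided with a single counter, which is essentially the service a pushdown automaton's stack provides; a minimal Python sketch:

    def is_balanced(s):
        """Decide membership in the parenthesis-matching language."""
        depth = 0
        for c in s:
            if c == "(":
                depth += 1
            elif c == ")":
                depth -= 1
                if depth < 0:        # a ")" with no matching "("
                    return False
            else:
                return False         # only "(" and ")" are terminals
        return depth == 0 and len(s) > 0   # the grammar derives nonempty strings only

    print(is_balanced("(()(()))"))   # True
    print(is_balanced(")("))         # False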
A second canonical example is two different kinds of matching nested parentheses, described by the productions:
S → SS
S → ()
S → (S)
S → []
S → [S]
with terminal symbols [ ] ( ) and nonterminal S.
The following sequence can be derived in that grammar:
S ⇒ SS ⇒ (S)S ⇒ ([S])S ⇒ ([[]])S ⇒ ([[]])()
In a context-free grammar, we can pair up characters the way we do with brackets. The simplest example:
S → aSb
S → ab
This grammar generates the language {aⁿbⁿ : n ≥ 1}, which is not regular (according to the pumping lemma for regular languages).
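A direct decision procedure for this language is easy to write in ordinary code; the point of the pumping-lemma remark is that no finite automaton (and hence no formal regular expression) can recognize it, since that would require unbounded counting. A minimal sketch:

    def in_anbn(s):
        """Decide membership in { a^n b^n : n >= 1 }."""
        n = len(s) // 2
        return len(s) % 2 == 0 and n >= 1 and s == "a" * n + "b" * n

    print(in_anbn("aaabbb"))  # True
    print(in_anbn("aabbb"))   # False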
The special character ε stands for the empty string. By changing the above grammar to
S → aSb
S → ε
we obtain a grammar generating the language {aⁿbⁿ : n ≥ 0} instead. This differs only in that it contains the empty string while the original grammar did not.
A context-free grammar for the language consisting of all strings over {a, b} containing an unequal number of a's and b's:
S → T | U
T → VaT | VaV | TaV
U → VbU | VbV | UbV
V → aVbV | bVaV | ε
Here, the nonterminal T can generate all strings with more a's than b's, the nonterminal U generates all strings with more b's than a's and the nonterminal V generates all strings with an equal number of a's and b's. Omitting the third alternative in the rules for T and U does not restrict the grammar's language.
Another example of a non-regular language is {bⁿaᵐb²ⁿ : n ≥ 0, m ≥ 0}. It is context-free as it can be generated by the following context-free grammar:
S → bSbb | A
A → aA | ε
The formation rules for the terms and formulas of formal logic fit the definition of context-free grammar, except that the set of symbols may be infinite and there may be more than one start symbol.
In contrast to well-formed nested parentheses and square brackets in the previous section, there is no context-free grammar for generating all sequences of two different types of parentheses, each separately balanced disregarding the other, where the two types need not nest inside one another, for example:
[ ( ] )
or
( [ ) ( ] )
The fact that this language is not context-free can be proven using the pumping lemma for context-free languages and a proof by contradiction, observing that all words of the form (ⁿ[ⁿ)ⁿ]ⁿ should belong to the language. This language belongs instead to a more general class and can be described by a conjunctive grammar, which in turn also includes other non-context-free languages, such as the language of all words of the form aⁿbⁿcⁿ.
Every regular grammar is context-free, but not all context-free grammars are regular. The following context-free grammar, for example, is also regular.
S → a
S → aS
S → bS
The terminals here are a and b, while the only nonterminal is S. The language described is all nonempty strings of a's and b's that end in a.
This grammar is regular: no rule has more than one nonterminal in its right-hand side, and each of these nonterminals is at the same end of the right-hand side.
Every regular grammar corresponds directly to a nondeterministic finite automaton, so we know that this is a regular language.
Using vertical bars, the grammar above can be described more tersely as follows:
S → a | aS | bS
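Since the language is regular, it is also described by an ordinary regular expression, namely [ab]*a. A short Python check (the sample words are arbitrary illustrations):

    import re

    # S -> a | aS | bS generates the nonempty strings over {a, b} ending in "a",
    # which is exactly the language of the regular expression [ab]*a.
    pattern = re.compile(r"[ab]*a")
    words = ["a", "ba", "abb", "bba", ""]
    print([w for w in words if pattern.fullmatch(w)])
    # ['a', 'ba', 'bba']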
A derivation of a string for a grammar is a sequence of grammar rule applications that transform the start symbol into the string. A derivation proves that the string belongs to the grammar's language.
A derivation is fully determined by giving, for each step:
For clarity, the intermediate string is usually given as well.
For instance, with the grammar:
the string 1 + 1 + a
can be derived from the start symbol S with the following derivation:
Often, a strategy is followed that deterministically chooses the next nonterminal to rewrite: in a leftmost derivation, it is always the leftmost nonterminal; in a rightmost derivation, it is always the rightmost nonterminal.
Given such a strategy, a derivation is completely determined by the sequence of rules applied. For instance, one leftmost derivation of the same string is
which can be summarized as
One rightmost derivation is:
which can be summarized as
The distinction between leftmost derivation and rightmost derivation is important because in most parsers the transformation of the input is defined by giving a piece of code for every grammar rule that is executed whenever the rule is applied. Therefore, it is important to know whether the parser determines a leftmost or a rightmost derivation, because this determines the order in which the pieces of code will be executed. See LL parsers and LR parsers for examples.
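The following Python sketch (an added illustration) replays a derivation from such a rule sequence. Since the example grammar is not reproduced here, it assumes the rules S → S + S, S → 1, and S → a, numbered 1 to 3, which can derive the string 1 + 1 + a discussed below.

```python
RULES = {1: ("S", ["S", "+", "S"]),   # rule 1: S -> S + S
         2: ("S", ["1"]),             # rule 2: S -> 1
         3: ("S", ["a"])}             # rule 3: S -> a

def leftmost_derivation(rule_numbers, start="S"):
    """Replay a derivation: each rule rewrites the leftmost nonterminal."""
    form = [start]
    yield " ".join(form)
    for n in rule_numbers:
        lhs, rhs = RULES[n]
        i = form.index(lhs)        # leftmost occurrence of the left-hand side
        form[i:i + 1] = rhs
        yield " ".join(form)

print(list(leftmost_derivation([1, 2, 1, 2, 3])))
# ['S', 'S + S', '1 + S', '1 + S + S', '1 + 1 + S', '1 + 1 + a']
```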
A derivation also imposes in some sense a hierarchical structure on the string that is derived. For example, if the string "1 + 1 + a" is derived according to the leftmost derivation outlined above, the structure of the string would be:
where {...}S indicates a substring recognized as belonging to S. This hierarchy can also be seen as a tree:
This tree is called a parse tree or "concrete syntax tree" of the string, by contrast with the abstract syntax tree. In this case the presented leftmost and the rightmost derivations define the same parse tree; however, there is another rightmost derivation of the same string
which defines a string with a different structure
and a different parse tree:
Note however that both parse trees can be obtained by both leftmost and rightmost derivations. For example, the last tree can be obtained with the leftmost derivation as follows:
If a string in the language of the grammar has more than one parse tree, then the grammar is said to be an ambiguous grammar. Such grammars are usually hard to parse because the parser cannot always decide which grammar rule it has to apply. Usually, ambiguity is a feature of the grammar, not the language, and an unambiguous grammar can be found that generates the same context-free language. However, there are certain languages that can only be generated by ambiguous grammars; such languages are called inherently ambiguous languages.
Here is a context-free grammar for syntactically correct infix algebraic expressions in the variables x, y and z:
This grammar can, for example, generate the string (x + y) * x - z * y / (x + x)
as follows:
Note that many choices were made along the way as to which rewrite was going to be performed next. These choices look quite arbitrary. As a matter of fact, they are, in the sense that the string finally generated is always the same. For example, the second and third rewrites
could be done in the opposite order:
Also, many choices were made on which rule to apply to each selected S. Changing the choices made, and not only the order in which they were made, usually affects which terminal string comes out at the end.
Let's look at this in more detail. Consider the parse tree of this derivation:
Starting at the top, step by step, an S in the tree is expanded, until no more unexpanded Ses (nonterminals) remain. Picking a different order of expansion will produce a different derivation, but the same parse tree. The parse tree will only change if we pick a different rule to apply at some position in the tree.
But can a different parse tree still produce the same terminal string, which is (x + y) * x - z * y / (x + x) in this case? Yes, for this particular grammar, this is possible. Grammars with this property are called ambiguous.
For example, x + y * z can be produced with these two different parse trees:
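A brute-force way to exhibit such ambiguity is to count parse trees. The Python sketch below (an added illustration; it assumes the relevant rules are S → S + S | S * S | x | y | z) finds exactly two trees for x + y * z, one per choice of top-level operator:

```python
from functools import lru_cache

TOKENS = tuple("x + y * z".split())

@lru_cache(maxsize=None)
def trees(i, j):
    """Number of parse trees deriving TOKENS[i:j] from S."""
    if j - i == 1:
        return 1 if TOKENS[i] in ("x", "y", "z") else 0
    total = 0
    for k in range(i + 1, j - 1):      # candidate top-level operator
        if TOKENS[k] in ("+", "*"):
            total += trees(i, k) * trees(k + 1, j)
    return total

print(trees(0, len(TOKENS)))  # 2: (x + y) * z and x + (y * z)
```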
However, the language described by this grammar is not inherently ambiguous: an alternative, unambiguous grammar can be given for the language, for example:
once again picking S as the start symbol. This alternative grammar will produce x + y * z with a parse tree similar to the left one above, i.e. implicitly assuming the association (x + y) * z, which does not follow standard order of operations. More elaborate, unambiguous and context-free grammars can be constructed that produce parse trees that obey all desired operator precedence and associativity rules.
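Such a precedence-respecting grammar can be made concrete. A standard layered form is Expr → Expr + Term | Term, Term → Term * Factor | Factor, Factor → x | y | z | ( Expr ); the Python sketch below (an added illustration, not the article's grammar) parses according to this structure by recursive descent, with the left recursion rewritten as iteration:

```python
def parse(tokens):
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def eat(expected):
        nonlocal pos
        assert peek() == expected, f"expected {expected!r}, got {peek()!r}"
        pos += 1
    def expr():                        # Expr -> Term ('+' Term)*
        node = term()
        while peek() == "+":
            eat("+")
            node = ("+", node, term())
        return node
    def term():                        # Term -> Factor ('*' Factor)*
        node = factor()
        while peek() == "*":
            eat("*")
            node = ("*", node, factor())
        return node
    def factor():                      # Factor -> x | y | z | '(' Expr ')'
        if peek() == "(":
            eat("(")
            node = expr()
            eat(")")
            return node
        tok = peek()
        assert tok in ("x", "y", "z"), f"unexpected token {tok!r}"
        eat(tok)
        return tok
    tree = expr()
    assert peek() is None, "trailing input"
    return tree

print(parse("x + y * z".split()))      # ('+', 'x', ('*', 'y', 'z'))
```

Because each precedence level has its own nonterminal, x + y * z now has exactly one parse tree, with * binding more tightly than +.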
Every context-free grammar with no ε-production has an equivalent grammar in Chomsky normal form, and a grammar in Greibach normal form. "Equivalent" here means that the two grammars generate the same language.
The especially simple form of production rules in Chomsky normal form grammars has both theoretical and practical implications. For instance, given a context-free grammar, one can use the Chomsky normal form to construct a polynomial-time algorithm that decides whether a given string is in the language represented by that grammar or not (the CYK algorithm).
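A compact sketch of the CYK idea (an added illustration): for a grammar in Chomsky normal form, a dynamic-programming table records which nonterminals derive each substring, giving a cubic-time membership test. The toy grammar S → AB, A → a, B → b, generating only the string ab, is assumed purely for demonstration:

```python
BINARY = {("A", "B"): {"S"}}      # rules X -> Y Z, keyed by (Y, Z)
UNARY = {"a": {"A"}, "b": {"B"}}  # rules X -> terminal

def cyk(word, start="S"):
    n = len(word)
    if n == 0:
        return False  # a CNF grammar cannot derive the empty string
    # table[i][l] = set of nonterminals deriving word[i:i+l]
    table = [[set() for _ in range(n + 1)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][1] = set(UNARY.get(ch, ()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                for y in table[i][split]:
                    for z in table[i + split][length - split]:
                        table[i][length] |= BINARY.get((y, z), set())
    return start in table[0][n]

assert cyk("ab") and not cyk("ba") and not cyk("abb")
```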
Context-free languages are closed under various operations; that is, if the languages K and L are context-free, so is the result of each of the following operations: union K ∪ L; concatenation K ⋅ L; Kleene star L*; substitution (in particular homomorphism); inverse homomorphism; and intersection with a regular language.
They are not closed under general intersection (hence neither under complementation) nor under set difference.
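Closure under union, for instance, has a one-line construction: rename the two grammars' nonterminals apart, add a fresh start symbol S, and add the rule S → S1 | S2. A minimal sketch, with representation and names chosen for illustration:

```python
def union_grammar(rules1, rules2, s1="S1", s2="S2", fresh="S"):
    """Grammars as dicts: nonterminal -> list of right-hand sides
    (lists of symbols). Assumes the nonterminal sets are disjoint."""
    merged = {fresh: [[s1], [s2]]}   # the new rule S -> S1 | S2
    merged.update(rules1)
    merged.update(rules2)
    return merged, fresh

g1 = {"S1": [["a", "S1", "b"], []]}  # generates {a^n b^n : n >= 0}
g2 = {"S2": [["c", "S2"], ["c"]]}    # generates {c^n : n >= 1}
print(union_grammar(g1, g2)[0])
```

Concatenation (S → S1 S2) and Kleene star (S → S1 S | ε) admit equally direct constructions, while closure under intersection with a regular language uses a product construction with a finite automaton.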
The following are some decidable problems about context-free grammars.
The parsing problem, checking whether a given word belongs to the language given by a context-free grammar, is decidable, using one of the general-purpose parsing algorithms: the CYK algorithm (for grammars in Chomsky normal form), Earley's algorithm, or the GLR parser.
Context-free parsing for Chomsky normal form grammars was shown by Leslie G. Valiant to be reducible to boolean matrix multiplication, thus inheriting its complexity upper bound of O(n^2.3728639). Conversely, Lillian Lee has shown O(n^(3−ε)) boolean matrix multiplication to be reducible to O(n^(3−3ε)) CFG parsing, thus establishing some kind of lower bound for the latter.
A nonterminal symbol X is called productive, or generating, if there is a derivation X ⇒* w for some string w of terminal symbols. X is called reachable if there is a derivation S ⇒* αXβ from the start symbol for some strings α, β of nonterminal and terminal symbols. X is called useless if it is unreachable or unproductive. X is called nullable if there is a derivation X ⇒* ε. A rule X → ε is called an ε-production. A derivation X ⇒+ X is called a cycle.
Algorithms are known to eliminate from a given grammar, without changing its generated language, unproductive symbols, unreachable symbols, ε-productions (provided the grammar does not generate the empty string), and cycles.
In particular, an alternative containing a useless nonterminal symbol can be deleted from the right-hand side of a rule. Such rules and alternatives are called useless.
In the depicted example grammar, the nonterminal D is unreachable, and E is unproductive, while C → C causes a cycle. Hence, omitting the last three rules does not change the language generated by the grammar, nor does omitting the alternatives "| Cc | Ee" from the right-hand side of the rule for S.
A context-free grammar is said to be proper if it has neither useless symbols nor ε-productions nor cycles. Combining the above algorithms, every context-free grammar not generating ε can be transformed into a weakly equivalent proper one.
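Both underlying computations are simple fixed-point iterations, sketched below in Python (an added illustration). The grammar depicted in the original is not reproduced here, so the toy grammar g mirrors its description: D unreachable, E unproductive, and C → C a cycle. Note that emptiness of L(G), discussed below, reduces to asking whether the start symbol is productive.

```python
def productive(grammar):
    """Nonterminals that derive some terminal string (fixed point)."""
    prod, changed = set(), True
    while changed:
        changed = False
        for x, rhss in grammar.items():
            if x not in prod and any(
                    all(s in prod or s not in grammar for s in rhs)
                    for rhs in rhss):
                prod.add(x)
                changed = True
    return prod

def reachable(grammar, start):
    """Nonterminals reachable from the start symbol (graph search)."""
    reach, stack = {start}, [start]
    while stack:
        for rhs in grammar.get(stack.pop(), []):
            for s in rhs:
                if s in grammar and s not in reach:
                    reach.add(s)
                    stack.append(s)
    return reach

g = {"S": [["B", "b"], ["C", "c"], ["E", "e"]],
     "B": [["B", "b"], ["b"]],
     "C": [["C"]],                  # the cycle C -> C
     "D": [["B", "d"]],
     "E": [["E", "e"]]}
print(sorted(productive(g)))        # ['B', 'D', 'S'] (C and E are not)
print(sorted(reachable(g, "S")))    # ['B', 'C', 'E', 'S'] (D is not)
```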
It is decidable whether a given grammar is a regular grammar, as well as whether it is an LL(k) grammar for a given k≥0. If k is not given, the latter problem is undecidable.
Given a context-free grammar, it is not decidable whether its language is regular, nor whether it is an LL(k) language for a given k.
There are algorithms to decide whether the language of a given context-free grammar is empty, as well as whether it is finite.
Some questions that are undecidable for wider classes of grammars become decidable for context-free grammars; e.g., the emptiness problem (whether the grammar generates any terminal strings at all) is undecidable for context-sensitive grammars, but decidable for context-free grammars.
However, many problems are undecidable even for context-free grammars. Examples are:
Given a CFG, does it generate the language of all strings over the alphabet of terminal symbols used in its rules?
A reduction can be demonstrated to this problem from the well-known undecidable problem of determining whether a Turing machine accepts a particular input (the halting problem). The reduction uses the concept of a computation history, a string describing an entire computation of a Turing machine. A CFG can be constructed that generates all strings that are not accepting computation histories for a particular Turing machine on a particular input; thus it generates all strings if and only if the machine does not accept that input.
Given two CFGs, do they generate the same language?
The undecidability of this problem is a direct consequence of the previous: it is impossible to even decide whether a CFG is equivalent to the trivial CFG defining the language of all strings.
Given two CFGs, can the first one generate all strings that the second one can generate?
If this problem were decidable, then language equality could be decided too: two CFGs G1 and G2 generate the same language if L(G1) is a subset of L(G2) and L(G2) is a subset of L(G1).
Using Greibach's theorem, it can be shown that the two following problems are undecidable:
Given a CFG, is it ambiguous?
The undecidability of this problem follows from the fact that if an algorithm to determine ambiguity existed, the Post correspondence problem could be decided, which is known to be undecidable. This may be proved by Ogden's lemma.
Given two CFGs, is there any string derivable from both grammars?
If this problem were decidable, the undecidable Post correspondence problem could be decided, too: given strings α1, …, αN, β1, …, βN over some alphabet {a1, …, ak}, let the grammar G1 consist of the rule
S → α1Sβ1^rev | … | αNSβN^rev | α1bβ1^rev | … | αNbβN^rev,
where βi^rev denotes the reversed string βi and b does not occur among the ai; and let grammar G2 consist of the rule
T → a1Ta1 | … | akTak | b.
Then the Post problem given by α1, …, αN, β1, …, βN has a solution if and only if L(G1) and L(G2) share a derivable string.
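For a concrete illustration (an example constructed here, not taken from the original text): let N = 2 with α1 = a, α2 = ab, β1 = aa, β2 = b. The index sequence 1, 2 solves this Post instance, since α1α2 = aab = β1β2. Correspondingly, both grammars derive the word aabbbaa: G1 via S → α1Sβ1^rev → α1α2bβ2^rev β1^rev = aabbbaa, and G2 as wbw^rev with w = aab.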
An obvious way to extend the context-free grammar formalism is to allow nonterminals to have arguments, the values of which are passed along within the rules. This allows natural language features such as agreement and reference, and programming language analogs such as the correct use and definition of identifiers, to be expressed in a natural way. E.g. we can now easily express that in English sentences, the subject and verb must agree in number. In computer science, examples of this approach include affix grammars, attribute grammars, indexed grammars, and Van Wijngaarden two-level grammars. Similar extensions exist in linguistics.
An extended context-free grammar (or regular right part grammar) is one in which the right-hand side of the production rules is allowed to be a regular expression over the grammar's terminals and nonterminals. Extended context-free grammars describe exactly the context-free languages.
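For example, the extended rule A → a (b | c)* d, whose right-hand side is a regular expression, can be rewritten into ordinary context-free rules by introducing an auxiliary nonterminal (names chosen here for illustration): A → aBd and B → bB | cB | ε. Repeating this elimination for every regular-expression construct shows that extended grammars generate exactly the context-free languages.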
Another extension is to allow additional terminal symbols to appear at the left-hand side of rules, constraining their application. This produces the formalism of context-sensitive grammars.
There are a number of important subclasses of the context-free grammars:
LR parsing extends LL parsing to support a larger range of grammars; in turn, generalized LR parsing extends LR parsing to support arbitrary context-free grammars. On LL grammars and LR grammars, it essentially performs LL parsing and LR parsing, respectively, while on nondeterministic grammars, it is as efficient as can be expected. Although GLR parsing was developed in the 1980s, many new language definitions and parser generators continue to be based on LL, LALR or LR parsing up to the present day.
Chomsky initially hoped to overcome the limitations of context-free grammars by adding transformation rules.
Such rules are another standard device in traditional linguistics, e.g. passivization in English. Much of generative grammar has been devoted to finding ways of refining the descriptive mechanisms of phrase-structure grammar and transformation rules such that exactly the kinds of things natural language actually allows can be expressed. Allowing arbitrary transformations does not meet that goal: they are much too powerful, being Turing complete unless significant restrictions are added (e.g. no transformations that introduce and then rewrite symbols in a context-free fashion).
Chomsky's general position regarding the non-context-freeness of natural language has held up since then, although his specific examples regarding the inadequacy of context-free grammars in terms of their weak generative capacity were later disproved. Gerald Gazdar and Geoffrey Pullum have argued that despite a few non-context-free constructions in natural language (such as cross-serial dependencies in Swiss German and reduplication in Bambara), the vast majority of forms in natural language are indeed context-free. | [
Cryonics

Cryonics (from Greek: κρύος kryos, meaning 'cold') is the low-temperature freezing (usually at −196 °C or −320.8 °F or 77.1 K) and storage of human remains, with the speculative hope that resurrection may be possible in the future. Cryonics is regarded with skepticism within the mainstream scientific community. It is generally viewed as a pseudoscience, and its practice has been characterized as quackery.
Cryonics procedures can begin only after the "patients" are clinically and legally dead. Cryonics procedures may begin within minutes of death, and use cryoprotectants to try to prevent ice formation during cryopreservation. It is, however, not possible for a corpse to be reanimated after undergoing vitrification, as this causes damage to the brain including its neural circuits. The first corpse to be frozen was that of James Bedford in 1967. As of 2014, about 250 bodies had been cryopreserved in the United States, and 1,500 people had made arrangements for cryopreservation of their corpses.
Economic reality means it is highly improbable that any cryonics corporation could continue in business long enough to take advantage of the claimed long-term benefits offered. Early attempts at cryonic preservation were made in the 1960s and early 1970s; they ended in failure, with all but one of the companies involved going out of business and their stored corpses thawed and disposed of.
Cryonicists argue that as long as brain structure remains intact, there is no fundamental barrier, given our current understanding of physical law, to recovering its information content. Cryonics proponents go further than the mainstream consensus in saying that the brain does not have to be continuously active to survive or retain memory. Cryonics controversially states that a human survives even within an inactive brain that has been badly damaged, provided that the original encoding of memory and personality can, in theory, be adequately inferred and reconstituted from what structure remains.
Cryonics uses temperatures below −130 °C, called cryopreservation, in an attempt to preserve enough brain information to permit the future revival of the cryopreserved person. Cryopreservation may be accomplished by freezing, freezing with cryoprotectant to reduce ice damage, or by vitrification to avoid ice damage. Even using the best methods, cryopreservation of whole bodies or brains is very damaging and irreversible with current technology.
Cryonics advocates hold that in the future the use of some kind of presently nonexistent nanotechnology may be able to help bring the dead back to life and treat the diseases that killed them. Mind uploading has also been proposed.
Cryonics can be expensive. As of 2018, the cost of preparing and storing corpses using cryonics ranged from US$28,000 to $200,000.
When used at high concentrations, cryoprotectants can stop ice formation completely. Cooling and solidification without crystal formation is called vitrification. The first cryoprotectant solutions able to vitrify at very slow cooling rates while still being compatible with whole organ survival were developed in the late 1990s by cryobiologists Gregory Fahy and Brian Wowk for the purpose of banking transplantable organs. This has allowed animal brains to be vitrified, warmed back up, and examined for ice damage using light and electron microscopy. No ice crystal damage was found; cellular damage was due to dehydration and toxicity of the cryoprotectant solutions.
Costs can include payment for medical personnel to be on call for death, vitrification, transportation in dry ice to a preservation facility, and payment into a trust fund intended to cover indefinite storage in liquid nitrogen and future revival costs. As of 2011, U.S. cryopreservation costs can range from $28,000 to $200,000, and are often financed via life insurance. KrioRus, which stores bodies communally in large dewars, charges $12,000 to $36,000 for the procedure. Some customers opt to have only their brain cryopreserved ("neuropreservation"), rather than their whole body.
As of 2014, about 250 corpses have been cryogenically preserved in the U.S., and around 1,500 people have signed up to have their remains preserved. As of 2016, four facilities exist in the world to retain cryopreserved bodies: three in the U.S. and one in Russia.
A more recent development is Tomorrow Biostasis GmbH, a Berlin-based firm offering cryonics, standby, and transportation services in Europe. Founded in December 2019 by Emil Kendziorra and Fernando Azevedo Pinheiro, it partners with the European Biostasis Foundation in Switzerland for long-term corpse storage; the foundation's facility was completed in 2022.
Considering the lifecycle of corporations, it is extremely unlikely that any cryonics company could continue to exist for sufficient time to take advantage even of the supposed benefits offered: historically, even the most robust corporations have only a one-in-a-thousand chance of surviving even one hundred years. Many cryonics companies have failed; as of 2018, all but one of the pre-1973 batch had gone out of business, and their stored corpses have been defrosted and disposed of.
Cryopreservation has long been used by medical laboratories to maintain animal cells, human embryos, and even some organized tissues, for periods as long as three decades. Recovering large animals and organs from a frozen state is however not considered possible at the current level of scientific knowledge. Large vitrified organs tend to develop fractures during cooling, a problem worsened by the large tissue masses and very low temperatures of cryonics. Without cryoprotectants, cell shrinkage and high salt concentrations during freezing usually prevent frozen cells from functioning again after thawing. Ice crystals can also disrupt connections between cells that are necessary for organs to function.
In 2016, Robert L. McIntyre and Gregory Fahy at the cryobiology research company 21st Century Medicine, Inc. won the Small Animal Brain Preservation Prize of the Brain Preservation Foundation by demonstrating to the satisfaction of neuroscientist judges that a particular implementation of fixation and vitrification called aldehyde-stabilized cryopreservation could preserve a rabbit brain in "near perfect" condition at −135 °C, with the cell membranes, synapses, and intracellular structures intact in electron micrographs. Brain Preservation Foundation President, Ken Hayworth, said, "This result directly answers a main skeptical and scientific criticism against cryonics—that it does not provably preserve the delicate synaptic circuitry of the brain." However, the price paid for perfect preservation, as seen by microscopy, was tying up all protein molecules with chemical crosslinks, eliminating biological viability.
Some cryonics organizations use vitrification without a chemical fixation step, sacrificing some structural preservation quality for less damage at the molecular level. Some scientists, like João Pedro Magalhães, have questioned whether using a deadly chemical for fixation eliminates the possibility of biological revival, making chemical fixation unsuitable for cryonics.
Outside of cryonics firms and cryonics-linked interest groups, many scientists show strong skepticism toward cryonics methods. Cryobiologist Dayong Gao states that "we simply don't know if (subjects have) been damaged to the point where they've 'died' during vitrification because the subjects are now inside liquid nitrogen canisters." Biochemist Ken Storey argues, based on experience with organ transplants, that "even if you only wanted to preserve the brain, it has dozens of different areas, which would need to be cryopreserved using different protocols."
Revival would require repairing damage from lack of oxygen, cryoprotectant toxicity, thermal stress (fracturing) and freezing in tissues that do not successfully vitrify, finally followed by reversing the cause of death. In many cases, extensive tissue regeneration would be necessary. This revival technology remains speculative and does not currently exist.
Historically, a person had little control over how their body was treated after death, as religion held jurisdiction over the ultimate fate of their body. However, secular courts began to exercise jurisdiction over the body and to use discretion in carrying out the wishes of the deceased person. Most countries legally treat preserved individuals as deceased persons because of laws that forbid vitrifying someone who is medically alive. In France, cryonics is not considered a legal mode of body disposal; only burial, cremation, and formal donation to science are allowed. However, bodies may legally be shipped to other countries for cryonic freezing. As of 2015, the Canadian province of British Columbia prohibits the sale of arrangements for body preservation based on cryonics. In Russia, cryonics falls outside both the medical industry and the funeral services industry, making it easier in Russia than in the U.S. to get hospitals and morgues to release cryonics candidates.
In London in 2016, the English High Court ruled in favor of a mother's right to seek cryopreservation of her terminally ill 14-year-old daughter, as the girl wanted, contrary to the father's wishes. The decision was made on the basis that the case represented a conventional dispute over the disposal of the girl's body, although the judge urged ministers to seek "proper regulation" for the future of cryonic preservation following concerns raised by the hospital about the competence and professionalism of the team that conducted the preservation procedures. In Alcor Life Extension Foundation v. Richardson, the Iowa Court of Appeals ordered the disinterment of Richardson, who had been buried against his wishes, so that he could be cryopreserved.
A detailed legal examination by Jochen Taupitz concludes that cryonic storage is legal in Germany for an indefinite period of time.
In 2009, writing in Bioethics, David Shaw examined cryonics. The arguments against it included changing the concept of death, the expense of preservation and revival, lack of scientific advancement to permit revival, temptation to use premature euthanasia, and failure due to catastrophe. Arguments in favor of cryonics include the potential benefit to society, the prospect of immortality, and the benefits associated with avoiding death. Shaw explores the expense and the potential payoff, and applies an adapted version of Pascal's Wager to the question.
In 2016, Charles Tandy wrote in favor of cryonics, arguing that honoring someone's last wishes is seen as a benevolent duty in American and many other cultures.
Cryopreservation was applied to human cells beginning in 1954 with frozen sperm, which was thawed and used to inseminate three women. The freezing of humans was first scientifically proposed by Michigan professor Robert Ettinger when he wrote The Prospect of Immortality (1962). In April 1966, the first human body was frozen—though it had been embalmed for two months—by being placed in liquid nitrogen and stored at just above freezing. The middle-aged woman from Los Angeles, whose name is unknown, was soon thawed out and buried by relatives.
The first body to be cryopreserved and then frozen with the hope of future revival was that of James Bedford; Alcor's Mike Darwin claims that this occurred within around two hours of Bedford's death from cardiorespiratory arrest (secondary to metastasized kidney cancer) on January 12, 1967. Bedford's corpse is the only one frozen before 1974 still preserved today. In 1976, Ettinger founded the Cryonics Institute; his corpse was cryopreserved in 2011. Robert Nelson, "a former TV repairman with no scientific background" who led the Cryonics Society of California, was sued in 1981 for allowing nine bodies to thaw and decompose in the 1970s; in his defense, he claimed that the Cryonics Society had run out of money. This damaged the reputation of cryonics in the U.S.
In 2018, a Y-Combinator startup called Nectome was recognized for developing a method of preserving brains with chemicals rather than by freezing. The method is fatal, performed as euthanasia under general anesthesia, but the hope is that future technology would allow the brain to be physically scanned into a computer simulation, neuron by neuron.
According to The New York Times, cryonicists are predominantly non-religious white males, outnumbering women by about three to one. According to The Guardian, as of 2008, while most cryonicists used to be young, male, and "geeky", recent demographics have shifted slightly towards whole families.
In 2015, Du Hong, a 61-year-old female writer of children's literature, became the first known Chinese national to have her head cryopreserved.
Cryonics is generally regarded as a fringe pseudoscience. The Society for Cryobiology rejected members who practiced cryonics, and issued a public statement saying that cryonics is "not science", and that it is a "personal choice" how people want to have their dead bodies disposed of.
Russian company KrioRus is the first non-US vendor of cryonics services. Yevgeny Alexandrov, chair of the Russian Academy of Sciences commission against pseudoscience, said there was "no scientific basis" for cryonics, and that the company's offering was based on "unfounded speculation".
Scientists have expressed skepticism about cryonics in media sources, and the Norwegian philosopher Ole Martin Moen has written that the topic receives a "minuscule" amount of attention from academia.
While some neuroscientists contend that all the subtleties of a human mind are contained in its anatomical structure, few neuroscientists will comment directly upon the topic of cryonics due to its speculative nature. Individuals who intend to be frozen are often "looked at as a bunch of kooks". Cryobiologist Kenneth B. Storey said in 2004 that cryonics is impossible and will never be possible, as cryonics proponents are proposing to "over-turn the laws of physics, chemistry, and molecular science". Neurobiologist Michael Hendricks has said that "Reanimation or simulation is an abjectly false hope that is beyond the promise of technology and is certainly impossible with the frozen, dead tissue offered by the 'cryonics' industry".
Anthropologist Simon Dein writes that cryonics is a typical pseudoscience because of its lack of falsifiability and testability. In Dein's view, cryonics is not science but religion: it places faith in non-existent technology and promises to overcome death itself.
William T. Jarvis has written that "Cryonics might be a suitable subject for scientific research, but marketing an unproven method to the public is quackery".
According to cryonicist Aschwin de Wolf and others, cryonics can often produce intense hostility from spouses who are not cryonicists. James Hughes, the executive director of the pro-life-extension Institute for Ethics and Emerging Technologies, chooses not to personally sign up for cryonics, calling it a worthy experiment but stating laconically that "I value my relationship with my wife."
Cryobiologist Dayong Gao states that "People can always have hope that things will change in the future, but there is no scientific foundation supporting cryonics at this time." While it is universally agreed that "personal identity" is uninterrupted when brain activity temporarily ceases during incidents of accidental drowning (where people have been restored to normal functioning after being completely submerged in cold water for up to 66 minutes), one argument against cryonics is that a centuries-long absence from life might interrupt the conception of personal identity, such that the revived person would "not be themself".
Maastricht University bioethicist David Shaw raises the argument that there would be no point in being revived in the far future if one's friends and families are dead, leaving them all alone; he notes, however, that family and friends can also be frozen, that there is "nothing to prevent the thawed-out freezee from making new friends", and that a lonely existence may be preferable to no existence at all for the revived.
Suspended animation is a popular subject in science fiction and fantasy settings. It is often the means by which a character is transported into the future.
A survey in Germany found that about half of the respondents were familiar with cryonics, and about half of those familiar with cryonics had learned of the subject from films or television.
The town of Nederland, Colorado, hosts an annual Frozen Dead Guy Days festival to commemorate a substandard attempt at cryopreservation.
Corpses subjected to the cryonics process include those of baseball players Ted Williams and son John Henry Williams (in 2002 and 2004, respectively), engineer and doctor L. Stephen Coles (in 2014), economist and entrepreneur Phil Salin, and software engineer Hal Finney (in 2014).
People known to have arranged for cryonics upon death include PayPal founders Luke Nosek and Peter Thiel, Oxford transhumanists Nick Bostrom and Anders Sandberg, and transhumanist philosopher David Pearce. Larry King previously arranged for cryonics, but according to Inside Edition, later changed his mind.
Disgraced financier Jeffrey Epstein wanted to have his head and penis frozen after death so that he could "seed the human race with his DNA".
The corpses of some are mistakenly believed to have undergone cryonics – for instance, the urban legend suggesting Walt Disney's corpse was cryopreserved is false; it was cremated and interred at Forest Lawn Memorial Park Cemetery. Timothy Leary was a long-time cryonics advocate and signed up with a major cryonics provider, but he changed his mind shortly before his death and was not cryopreserved. | [
{
"paragraph_id": 0,
"text": "Cryonics (from Greek: κρύος kryos meaning 'cold') is the low-temperature freezing (usually at −196 °C or −320.8 °F or 77.1 K) and storage of human remains, with the speculative hope that resurrection may be possible in the future. Cryonics is regarded with skepticism within the mainstream scientific community. It is generally viewed as a pseudoscience, and its practice has been characterized as quackery.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cryonics procedures can begin only after the \"patients\" are clinically and legally dead. Cryonics procedures may begin within minutes of death, and use cryoprotectants to try to prevent ice formation during cryopreservation. It is, however, not possible for a corpse to be reanimated after undergoing vitrification, as this causes damage to the brain including its neural circuits. The first corpse to be frozen was that of James Bedford in 1967. As of 2014, about 250 bodies had been cryopreserved in the United States, and 1,500 people had made arrangements for cryopreservation of their corpses.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Economic reality means it is highly improbable that any cryonics corporation could continue in business long enough to take advantage of the claimed long-term benefits offered. Early attempts of cryonic preservations were performed in the 1960s and early 1970s which ended in failure with all but one of the companies going out of business, and their stored corpses thawed and disposed of.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Cryonicists argue that as long as brain structure remains intact, there is no fundamental barrier, given our current understanding of physical law, to recovering its information content. Cryonics proponents go further than the mainstream consensus in saying that the brain does not have to be continuously active to survive or retain memory. Cryonics controversially states that a human survives even within an inactive brain that has been badly damaged, provided that original encoding of memory and personality can, in theory, be adequately inferred and reconstituted from what structure remains.",
"title": "Conceptual basis"
},
{
"paragraph_id": 4,
"text": "Cryonics uses temperatures below −130 °C, called cryopreservation, in an attempt to preserve enough brain information to permit the future revival of the cryopreserved person. Cryopreservation may be accomplished by freezing, freezing with cryoprotectant to reduce ice damage, or by vitrification to avoid ice damage. Even using the best methods, cryopreservation of whole bodies or brains is very damaging and irreversible with current technology.",
"title": "Conceptual basis"
},
{
"paragraph_id": 5,
"text": "Cryonics advocates hold that in the future the use of some kind of presently-nonexistent nanotechnology may be able to help bring the dead back to life and treat the diseases which killed them. Mind uploading has also been proposed.",
"title": "Conceptual basis"
},
{
"paragraph_id": 6,
"text": "Cryonics can be expensive. As of 2018, the cost of preparing and storing corpses using cryonics ranged from US$28,000 to $200,000.",
"title": "Cryonics in practice"
},
{
"paragraph_id": 7,
"text": "When used at high concentrations, cryoprotectants can stop ice formation completely. Cooling and solidification without crystal formation is called vitrification. The first cryoprotectant solutions able to vitrify at very slow cooling rates while still being compatible with whole organ survival were developed in the late 1990s by cryobiologists Gregory Fahy and Brian Wowk for the purpose of banking transplantable organs. This has allowed animal brains to be vitrified, warmed back up, and examined for ice damage using light and electron microscopy. No ice crystal damage was found; cellular damage was due to dehydration and toxicity of the cryoprotectant solutions.",
"title": "Cryonics in practice"
},
{
"paragraph_id": 8,
"text": "Costs can include payment for medical personnel to be on call for death, vitrification, transportation in dry ice to a preservation facility, and payment into a trust fund intended to cover indefinite storage in liquid nitrogen and future revival costs. As of 2011, U.S. cryopreservation costs can range from $28,000 to $200,000, and are often financed via life insurance. KrioRus, which stores bodies communally in large dewars, charges $12,000 to $36,000 for the procedure. Some customers opt to have only their brain cryopreserved (\"neuropreservation\"), rather than their whole body.",
"title": "Cryonics in practice"
},
{
"paragraph_id": 9,
"text": "As of 2014, about 250 corpses have been cryogenically preserved in the U.S., and around 1,500 people have signed up to have their remains preserved. As of 2016, four facilities exist in the world to retain cryopreserved bodies: three in the U.S. and one in Russia.",
"title": "Cryonics in practice"
},
{
"paragraph_id": 10,
"text": "A more recent development is Tomorrow Biostasis GmbH, which is a Berlin-based firm offering cryonics and standby and transportation services in Europe. Founded in December 2019 by Emil Kendziorra and Fernando Azevedo Pinheiro, it partners with the European Biostasis Foundation in Switzerland for long-term corpse storage, with their facility completed in 2022.",
"title": "Cryonics in practice"
},
{
"paragraph_id": 11,
"text": "Considering the lifecycle of corporations, it is extremely unlikely that any cryonics company could continue to exist for sufficient time to take advantage even of the supposed benefits offered: historically, even the most robust corporations have only a one-in-a-thousand chance of surviving even one hundred years. Many cryonics companies have failed; as of 2018, all but one of the pre-1973 batch had gone out of business, and their stored corpses have been defrosted and disposed of.",
"title": "Cryonics in practice"
},
{
"paragraph_id": 12,
"text": "Cryopreservation has long been used by medical laboratories to maintain animal cells, human embryos, and even some organized tissues, for periods as long as three decades. Recovering large animals and organs from a frozen state is however not considered possible at the current level of scientific knowledge. Large vitrified organs tend to develop fractures during cooling, a problem worsened by the large tissue masses and very low temperatures of cryonics. Without cryoprotectants, cell shrinkage and high salt concentrations during freezing usually prevent frozen cells from functioning again after thawing. Ice crystals can also disrupt connections between cells that are necessary for organs to function.",
"title": "Obstacles to success"
},
{
"paragraph_id": 13,
"text": "In 2016, Robert L. McIntyre and Gregory Fahy at the cryobiology research company 21st Century Medicine, Inc. won the Small Animal Brain Preservation Prize of the Brain Preservation Foundation by demonstrating to the satisfaction of neuroscientist judges that a particular implementation of fixation and vitrification called aldehyde-stabilized cryopreservation could preserve a rabbit brain in \"near perfect\" condition at −135 °C, with the cell membranes, synapses, and intracellular structures intact in electron micrographs. Brain Preservation Foundation President, Ken Hayworth, said, \"This result directly answers a main skeptical and scientific criticism against cryonics—that it does not provably preserve the delicate synaptic circuitry of the brain.\" However, the price paid for perfect preservation, as seen by microscopy, was tying up all protein molecules with chemical crosslinks, eliminating biological viability.",
"title": "Obstacles to success"
},
{
"paragraph_id": 14,
"text": "Some cryonics organizations use vitrification without a chemical fixation step, sacrificing some structural preservation quality for less damage at the molecular level. Some scientists, like João Pedro Magalhães, have questioned whether using a deadly chemical for fixation eliminates the possibility of biological revival, making chemical fixation unsuitable for cryonics.",
"title": "Obstacles to success"
},
{
"paragraph_id": 15,
"text": "Outside of cryonics firms and cryonics-linked interest groups, many scientists show strong skepticism toward cryonics methods. Cryobiologist Dayong Gao states that \"we simply don't know if (subjects have) been damaged to the point where they've 'died' during vitrification because the subjects are now inside liquid nitrogen canisters.\" Biochemist Ken Storey argues (based on experience with organ transplants), that \"even if you only wanted to preserve the brain, it has dozens of different areas, which would need to be cryopreserved using different protocols.\"",
"title": "Obstacles to success"
},
{
"paragraph_id": 16,
"text": "Revival would require repairing damage from lack of oxygen, cryoprotectant toxicity, thermal stress (fracturing) and freezing in tissues that do not successfully vitrify, finally followed by reversing the cause of death. In many cases, extensive tissue regeneration would be necessary. This revival technology remains speculative and does not currently exist.",
"title": "Obstacles to success"
},
{
"paragraph_id": 17,
"text": "Historically, a person had little control regarding how their body was treated after death as religion held jurisdiction over the ultimate fate of their body. However, secular courts began to exercise jurisdiction over the body and use discretion in carrying out of the wishes of the deceased person. Most countries legally treat preserved individuals as deceased persons because of laws that forbid vitrifying someone who is medically alive. In France, cryonics is not considered a legal mode of body disposal; only burial, cremation, and formal donation to science are allowed. However, bodies may legally be shipped to other countries for cryonic freezing. As of 2015, the Canadian province of British Columbia prohibits the sale of arrangements for body preservation based on cryonics. In Russia, cryonics falls outside both the medical industry and the funeral services industry, making it easier in Russia than in the U.S. to get hospitals and morgues to release cryonics candidates.",
"title": "Obstacles to success"
},
{
"paragraph_id": 18,
"text": "In London in 2016, the English High Court ruled in favor of a mother's right to seek cryopreservation of her terminally ill 14-year-old daughter, as the girl wanted, contrary to the father's wishes. The decision was made on the basis that the case represented a conventional dispute over the disposal of the girl's body, although the judge urged ministers to seek \"proper regulation\" for the future of cryonic preservation following concerns raised by the hospital about the competence and professionalism of the team that conducted the preservation procedures. In Alcor Life Extension Foundation v. Richardson, the Iowa Court of Appeals ordered for the disinterment of Richardson, who was buried against his wishes, for cryopreservation.",
"title": "Obstacles to success"
},
{
"paragraph_id": 19,
"text": "A detailed legal examination by Jochen Taupitz concludes that cryonic storage is legal in Germany for an indefinite period of time.",
"title": "Obstacles to success"
},
{
"paragraph_id": 20,
"text": "In 2009, writing in Bioethics, David Shaw examined cryonics. The arguments against it included changing the concept of death, the expense of preservation and revival, lack of scientific advancement to permit revival, temptation to use premature euthanasia, and failure due to catastrophe. Arguments in favor of cryonics include the potential benefit to society, the prospect of immortality, and the benefits associated with avoiding death. Shaw explores the expense and the potential payoff, and applies an adapted version of Pascal's Wager to the question.",
"title": "Ethics"
},
{
"paragraph_id": 21,
"text": "In 2016, Charles Tandy wrote in favor of cryonics, arguing that honoring someone's last wishes is seen as a benevolent duty in American and many other cultures.",
"title": "Ethics"
},
{
"paragraph_id": 22,
"text": "Cryopreservation was applied to human cells beginning in 1954 with frozen sperm, which was thawed and used to inseminate three women. The freezing of humans was first scientifically proposed by Michigan professor Robert Ettinger when he wrote The Prospect of Immortality (1962). In April 1966, the first human body was frozen—though it had been embalmed for two months—by being placed in liquid nitrogen and stored at just above freezing. The middle-aged woman from Los Angeles, whose name is unknown, was soon thawed out and buried by relatives.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "The first body to be cryopreserved and then frozen with the hope of future revival was that of James Bedford, claimed by Alcor's Mike Darwin to have occurred within around two hours of his death from cardiorespiratory arrest (secondary to metastasized kidney cancer) on January 12, 1967. Bedford's corpse is the only one frozen before 1974 still preserved today. In 1976, Ettinger founded the Cryonics Institute; his corpse was cryopreserved in 2011. Robert Nelson, \"a former TV repairman with no scientific background\" who led the Cryonics Society of California, was sued in 1981 for allowing nine bodies to thaw and decompose in the 1970s; in his defense, he claimed that the Cryonics Society had run out of money. This led to the lowered reputation of cryonics in the U.S.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "In 2018, a Y-Combinator startup called Nectome was recognized for developing a method of preserving brains with chemicals rather than by freezing. The method is fatal, performed as euthanasia under general anesthesia, but the hope is that future technology would allow the brain to be physically scanned into a computer simulation, neuron by neuron.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "According to The New York Times, cryonicists are predominantly non-religious white males, outnumbering women by about three to one. According to The Guardian, as of 2008, while most cryonicists used to be young, male, and \"geeky\", recent demographics have shifted slightly towards whole families.",
"title": "Demographics"
},
{
"paragraph_id": 26,
"text": "In 2015, Du Hong, a 61-year-old female writer of children's literature, became the first known Chinese national to have her head cryopreserved.",
"title": "Demographics"
},
{
"paragraph_id": 27,
"text": "Cryonics is generally regarded as a fringe pseudoscience. The Society for Cryobiology rejected members who practiced cryonics, and issued a public statement saying that cryonics is \"not science\", and that it is a \"personal choice\" how people want to have their dead bodies disposed of.",
"title": "Reception"
},
{
"paragraph_id": 28,
"text": "Russian company KrioRus is the first non-US vendor of cryonics services. Yevgeny Alexandrov, chair of the Russian Academy of Sciences commission against pseudoscience, said there was \"no scientific basis\" for cryonics, and that the company's offering was based on \"unfounded speculation\".",
"title": "Reception"
},
{
"paragraph_id": 29,
"text": "Scientists have expressed skepticism about cryonics in media sources, and the Norwegian philosopher Ole Martin Moen has written that the topic receives a \"minuscule\" amount of attention from academia.",
"title": "Reception"
},
{
"paragraph_id": 30,
"text": "While some neuroscientists contend that all the subtleties of a human mind are contained in its anatomical structure, few neuroscientists will comment directly upon the topic of cryonics due to its speculative nature. Individuals who intend to be frozen are often \"looked at as a bunch of kooks\". Cryobiologist Kenneth B. Storey said in 2004 that cryonics is impossible and will never be possible, as cryonics proponents are proposing to \"over-turn the laws of physics, chemistry, and molecular science\". Neurobiologist Michael Hendricks has said that \"Reanimation or simulation is an abjectly false hope that is beyond the promise of technology and is certainly impossible with the frozen, dead tissue offered by the 'cryonics' industry\".",
"title": "Reception"
},
{
"paragraph_id": 31,
"text": "Anthropologist Simon Dein write that cryonics is a typical pseudoscience because of its lack of falsifiability and testability. In Dein's view cryonics is not science, but religion: it places faith in non-existent technology and promises to overcome death itself.",
"title": "Reception"
},
{
"paragraph_id": 32,
"text": "William T. Jarvis has written that \"Cryonics might be a suitable subject for scientific research, but marketing an unproven method to the public is quackery\".",
"title": "Reception"
},
{
"paragraph_id": 33,
"text": "According to cryonicist Aschwin de Wolf and others, cryonics can often produce intense hostility from spouses who are not cryonicists. James Hughes, the executive director of the pro-life-extension Institute for Ethics and Emerging Technologies, chooses not to personally sign up for cryonics, calling it a worthy experiment but stating laconically that \"I value my relationship with my wife.\"",
"title": "Reception"
},
{
"paragraph_id": 34,
"text": "Cryobiologist Dayong Gao states that \"People can always have hope that things will change in the future, but there is no scientific foundation supporting cryonics at this time.\" While it is universally agreed that \"personal identity\" is uninterrupted when brain activity temporarily ceases during incidents of accidental drowning (where people have been restored to normal functioning after being completely submerged in cold water for up to 66 minutes), one argument against cryonics is that a centuries-long absence from life might interrupt the conception of personal identity, such that the revived person would \"not be themself\".",
"title": "Reception"
},
{
"paragraph_id": 35,
"text": "Maastricht University bioethicist David Shaw raises the argument that there would be no point in being revived in the far future if one's friends and families are dead, leaving them all alone; he notes, however, that family and friends can also be frozen, that there is \"nothing to prevent the thawed-out freezee from making new friends\", and that a lonely existence may be preferable to no existence at all for the revived.",
"title": "Reception"
},
{
"paragraph_id": 36,
"text": "Suspended animation is a popular subject in science fiction and fantasy settings. It is often the means by which a character is transported into the future.",
"title": "In fiction"
},
{
"paragraph_id": 37,
"text": "A survey in Germany found that about half of the respondents were familiar with cryonics, and about half of those familiar with cryonics had learned of the subject from films or television.",
"title": "In fiction"
},
{
"paragraph_id": 38,
"text": "The town of Nederland, Colorado, hosts an annual Frozen Dead Guy Days festival to commemorate a substandard attempt at cryopreservation.",
"title": "In popular culture"
},
{
"paragraph_id": 39,
"text": "Corpses subjected to the cryonics process include those of baseball players Ted Williams and son John Henry Williams (in 2002 and 2004, respectively), engineer and doctor L. Stephen Coles (in 2014), economist and entrepreneur Phil Salin, and software engineer Hal Finney (in 2014).",
"title": "Notable people"
},
{
"paragraph_id": 40,
"text": "People known to have arranged for cryonics upon death include PayPal founders Luke Nosek and Peter Thiel, Oxford transhumanists Nick Bostrom and Anders Sandberg, and transhumanist philosopher David Pearce. Larry King previously arranged for cryonics, but according to Inside Edition, later changed his mind.",
"title": "Notable people"
},
{
"paragraph_id": 41,
"text": "Disgraced financier Jeffrey Epstein wanted to have his head and penis frozen after death so that he could \"seed the human race with his DNA\".",
"title": "Notable people"
},
{
"paragraph_id": 42,
"text": "The corpses of some are mistakenly believed to have undergone cryonics – for instance, the urban legend suggesting Walt Disney's corpse was cryopreserved is false; it was cremated and interred at Forest Lawn Memorial Park Cemetery. Timothy Leary was a long-time cryonics advocate and signed up with a major cryonics provider, but he changed his mind shortly before his death and was not cryopreserved.",
"title": "Notable people"
}
] | Cryonics is the low-temperature freezing and storage of human remains, with the speculative hope that resurrection may be possible in the future. Cryonics is regarded with skepticism within the mainstream scientific community. It is generally viewed as a pseudoscience, and its practice has been characterized as quackery. Cryonics procedures can begin only after the "patients" are clinically and legally dead. Cryonics procedures may begin within minutes of death, and use cryoprotectants to try to prevent ice formation during cryopreservation. It is, however, not possible for a corpse to be reanimated after undergoing vitrification, as this causes damage to the brain including its neural circuits. The first corpse to be frozen was that of James Bedford in 1967. As of 2014, about 250 bodies had been cryopreserved in the United States, and 1,500 people had made arrangements for cryopreservation of their corpses. Economic reality means it is highly improbable that any cryonics corporation could continue in business long enough to take advantage of the claimed long-term benefits offered. Early attempts at cryonic preservation were made in the 1960s and early 1970s; they ended in failure, with all but one of the companies involved going out of business and their stored corpses thawed and disposed of. | 2001-10-12T03:21:34Z | 2023-12-29T09:04:09Z | [
"Template:Short description",
"Template:Pp-protected",
"Template:See also",
"Template:For multi",
"Template:Asof",
"Template:Notelist",
"Template:Commons category",
"Template:Death",
"Template:Curlie",
"Template:Use dmy dates",
"Template:Cvt",
"Template:Efn",
"Template:Cite news",
"Template:Cite SSRN",
"Template:Cite episode",
"Template:Wikiquote",
"Template:Pseudoscience",
"Template:Cite magazine",
"Template:Lang-el",
"Template:Bsn",
"Template:Main",
"Template:Reflist",
"Template:Cite book",
"Template:Cite journal",
"Template:Cite web",
"Template:Cryonics"
] | https://en.wikipedia.org/wiki/Cryonics |
6,761 | Unitary patent | The European patent with unitary effect, also known as the unitary patent, is a European patent which benefits from unitary effect in the 17 participating member states of the European Union. Unitary effect may be requested by the proprietor within one month of grant of a European patent, replacing validation of the European patent in the individual countries concerned. Infringement and revocation proceedings are conducted before the Unified Patent Court (UPC), whose decisions have uniform effect for the unitary patent in the participating member states as a whole rather than in each country individually. The unitary patent may only be limited, transferred or revoked, or lapse, in respect of all the participating Member States. Licensing is, however, possible for part of the unitary territory. The unitary patent may coexist with nationally enforceable patents ("classical" patents) in the non-participating states. The unitary patent's stated aims are to make access to the patent system "easier, less costly and legally secure within the European Union" and to achieve "the creation of uniform patent protection throughout the Union".
European patents are granted in English, French, or German and the unitary effect will not require further translations after a transition period. The maintenance fees of the unitary patents are lower than the sum of the renewal fees for national patents of the corresponding area, being equivalent to the combined maintenance fees of Germany, France, the UK and the Netherlands (although the UK is no longer participating following Brexit).
The negotiations which resulted in the unitary patent can be traced back to various initiatives dating to the 1970s. At different times, the project, or very similar projects, have been referred to as the "European Union patent" (the name used in the EU treaties, which serve as the legal basis for EU competency), "EU patent", "Community patent", "European Community Patent", "EC patent" and "COMPAT".
On 17 December 2012, agreement was reached between the European Council and European Parliament on the two EU regulations that made the unitary patent possible through enhanced cooperation at EU level. The legality of the two regulations was challenged by Spain and Italy, but all their claims were rejected by the European Court of Justice. Italy subsequently joined the unitary patent regulation in September 2015, so that all EU member states except Spain and Croatia now participate in the enhanced cooperation for a unitary patent. Unitary effect of newly granted European patents will be available from the date when the related Unified Patent Court Agreement enters into force for those EU countries that have also ratified the UPC, and will extend to those participating member states for which the UPC Agreement enters into force at the time of registration of the unitary patent. Previously granted unitary patents will not automatically get their unitary effect extended to the territory of participating states which ratify the UPC agreement at a later date.
The unitary patent system has applied since 1 June 2023, the date of entry into force of the UPC Agreement.
In 2009, three draft documents were published regarding a community patent, i.e. a European patent in which the European Community was designated: the Council regulation on the community patent, the Agreement on the European and Community Patents Court (open to the European Community and all states of the European Patent Convention), and the Decision to open negotiations regarding that Agreement.
Based on those documents, the European Council requested on 6 July 2009 an opinion from the Court of Justice of the European Union regarding the compatibility of the envisioned Agreement with EU law: "Is the envisaged agreement creating a Unified Patent Litigation System (currently named European and Community Patents Court) compatible with the provisions of the Treaty establishing the European Community?"
In December 2010, the use of the enhanced co-operation procedure, under which Articles 326–334 of the Treaty on the Functioning of the European Union provide that a group of member states of the European Union can choose to co-operate on a specific topic, was proposed by twelve Member States to set up a unitary patent applicable in all participating European Union Member States. This procedure had been used only once before, for harmonising rules regarding the applicable law in divorce across several EU Member States.
In early 2011, the procedure leading to the enhanced co-operation was reported to be progressing. Twenty-five Member States had written to the European Commission requesting to participate, with Spain and Italy remaining outside, primarily on the basis of ongoing concerns over translation issues. On 15 February, the European Parliament approved the use of the enhanced co-operation procedure for unitary patent protection by a vote of 471 to 160, and on 10 March 2011 the Council gave their authorisation. Two days earlier, on 8 March 2011, the Court of Justice of the European Union had issued its opinion, stating that the draft Agreement creating the European and Community Patent Court would be incompatible with EU law. The same day, the Hungarian Presidency of the Council insisted that this opinion would not affect the enhanced co-operation procedure.
In November 2011, negotiations on the enhanced co-operation system were reportedly advancing rapidly—too fast, in some views. It was announced that implementation required an enabling European Regulation, and a Court agreement between the states that elect to take part. The European Parliament approved the continuation of negotiations in September. A draft of the agreement was issued on 11 November 2011 and was open to all member states of the European Union, but not to other European Patent Convention states. However, serious criticisms of the proposal remained mostly unresolved. A meeting of the Competitiveness Council on 5 December failed to agree on the final text. In particular, there was no agreement on where the Central Division of a Unified Patent Court should be located, "with London, Munich and Paris the candidate cities."
The Polish Presidency acknowledged on 16 December 2011 the failure to reach an agreement "on the question of the location of the seat of the central division." The Danish Presidency therefore inherited the issue. According to the President of the European Commission in January 2012, the only question remaining to be settled was the location of the Central Division of the Court. However, evidence presented to the UK House of Commons European Scrutiny Committee in February suggested that the position was more complicated. At an EU summit at the end of January 2012, participants agreed to press on and finalise the system by June. On 26 April, Herman Van Rompuy, President of the European Council, wrote to members of the council, saying "This important file has been discussed for many years and we are now very close to a final deal. ... This deal is needed now, because this is an issue of crucial importance for innovation and growth. I very much hope that the last outstanding issue will be sorted out at the May Competitiveness Council. If not, I will take it up at the June European Council." The Competitiveness Council met on 30 May and failed to reach agreement.
A compromise agreement on the seat(s) of the unified court was eventually reached at the June European Council (28–29 June 2012), splitting the central division according to technology between Paris (the main seat), London and Munich. However, on 2 July 2012, the European Parliament decided to postpone the vote following a move by the European Council to modify the arrangements previously approved by MEPs in negotiations with the European Council. The modification was considered controversial and included the deletion of three key articles (6–8) of the legislation, seeking to reduce the competence of the European Union Court of Justice in unitary patent litigation. On 9 July 2012, the Committee on Legal Affairs of the European Parliament debated the patent package in camera, in the presence of MEP Bernhard Rapkay, following the decisions adopted by the General Council on 28–29 June 2012. A later press release by Rapkay quoted from a legal opinion submitted by the Legal Service of the European Parliament, which affirmed MEPs' concerns about the recent EU summit decision to delete said articles, as such deletion "nullifies central aspects of a substantive patent protection". A Europe-wide uniform protection of intellectual property would thus not exist, with the consequence that the requirements of the corresponding EU treaty would not be met and that the European Court of Justice could therefore invalidate the legislation. By the end of 2012, a new compromise was reached between the European Parliament and the European Council, including a limited role for the European Court of Justice. The Unified Court will apply the Unified Patent Court Agreement, which is considered national patent law from an EU law point of view but is nevertheless identical for each participant. [However, the draft statutory instrument aimed at implementation of the Unified Court and UPC in the UK provides for different infringement laws for: European patents (unitary or not) litigated through the Unified Court; European patents (UK) litigated before UK courts; and national patents.] The legislation for the enhanced co-operation mechanism was approved by the European Parliament on 11 December 2012, and the regulations were signed by European Council and European Parliament officials on 17 December 2012.
On 30 May 2011, Italy and Spain challenged before the CJEU the council's authorisation of the use of enhanced co-operation to introduce the trilingual (English, French, German) system for the unitary patent, which they viewed as discriminatory to their languages, on the grounds that it did not comply with the EU treaties. In January 2013, Advocate General Yves Bot delivered his recommendation that the court reject the complaint. Suggestions by the Advocate General are advisory only, but are generally followed by the court. The case was dismissed by the court in April 2013; however, Spain launched two new challenges before the CJEU in March 2013 against the regulations implementing the unitary patent package. The court hearing for both cases was scheduled for 1 July 2014. Advocate General Yves Bot published his opinion on 18 November 2014, suggesting that both actions be dismissed (ECLI:EU:C:2014:2380 and ECLI:EU:C:2014:2381). The court handed down its decisions on 5 May 2015 as ECLI:EU:C:2015:298 and ECLI:EU:C:2015:299, fully dismissing the Spanish claims. Following a request by its government, Italy became a participant of the unitary patent regulations in September 2015.
European patents are granted in accordance with the provisions of the European Patent Convention (EPC), via a unified procedure before the European Patent Office (EPO). While upon filing of a European patent application, all 39 Contracting States are automatically designated, a European patent becomes a bundle of "national" European patents upon grant. In contrast to the unified character of a European patent application, a granted European patent has, in effect, no unitary character, except for the centralized opposition procedure (which can be initiated within nine months of grant by someone other than the patent proprietor), and the centralized limitation and revocation procedures (which can only be instituted by the patent proprietor). In other words, a European patent in one Contracting State, i.e. a "national" European patent, is effectively independent of the same European patent in each other Contracting State, except for the opposition, limitation and revocation procedures. The enforcement of a European patent is dealt with by national law. The abandonment, revocation or limitation of the European patent in one state does not affect the European patent in other states.
While the EPC already provided the possibility for a group of member states to allow European patents to have a unitary character also after grant, until now, only Liechtenstein and Switzerland have opted to create a unified protection area (see Unitary patent (Switzerland and Liechtenstein)).
By requesting unitary effect within one month of grant, the patent proprietor is now able to obtain uniform protection in the participating member states of the European Union in a single step, considerably simplifying obtaining patent protection in a large part of the EU. The unitary patent system co-exists with national patent systems and with European patents without unitary effect. The unitary patent does not cover EPC countries that are not members of the European Union, such as the UK or Turkey.
The implementation of the unitary patent is based on three legal instruments: Regulation (EU) No 1257/2012, implementing enhanced cooperation in the area of the creation of unitary patent protection; Council Regulation (EU) No 1260/2012, implementing that enhanced cooperation with regard to the applicable translation arrangements; and the Agreement on a Unified Patent Court.
Thus the unitary patent is based on EU law as well as the European Patent Convention (EPC). Article 142 EPC provides the legal basis for establishing a common system of patents for Parties to the EPC. Previously, only Liechtenstein and Switzerland had used this possibility to create a unified protection area (see Unitary patent (Switzerland and Liechtenstein)).
The first two regulations were approved by the European Parliament on 11 December 2012, with future application set for the 25 member states then participating in the enhanced cooperation for a unitary patent (all then EU member states except Croatia, Italy and Spain). The instruments were adopted as regulations EU 1257/2012 and 1260/2012 on 17 December 2012, and entered into force in January 2013. Following a request by its government, Italy became a participant of the unitary patent regulations in September 2015.
As of March 2022, neither of the two remaining non-participants in the unitary patent (Spain and Croatia) had requested the European Commission to participate.
Although formally the Regulations will apply to all 25 participating states from the moment the UPC Agreement enters into force for the first group of ratifiers, the unitary effect of newly granted unitary patents will only extend to those of the 25 states where the UPC Agreement has entered into force, while patent coverage for other participating states without UPC Agreement ratification will be covered by a coexisting normal European patent in each of those states.
The unitary effect of unitary patents means a single renewal fee, a single ownership, a single object of property, a single court (the Unified Patent Court) and uniform protection, which means that revocation as well as infringement proceedings are to be decided for the unitary patent as a whole rather than for each country individually. Licensing is however to remain possible for part of the unitary territory.
Some administrative tasks relating to the European patents with unitary effect are performed by the European Patent Office, as authorized by Article 143(1) EPC. These tasks include the collection of renewal fees and registration of unitary effect upon grant, recording licenses and statements that licenses are available to any person. Decisions of the European Patent Office regarding the unitary patent are open to appeal to the Unified Patent Court, rather than to the EPO Boards of Appeal.
For a unitary patent, ultimately no translation will be required (except under certain circumstances in the event of a dispute), which is expected to significantly reduce the cost for protection in the whole area. However, Article 6 of Regulation 1260/2012 provides that, during a transitional period of at least six and no more than twelve years, one translation needs to be provided. Namely, a full translation of the European patent specification needs to be provided either into English if the language of the proceedings at the EPO was French or German, or into any other EU official language if the language of the proceedings at the EPO was English. Such translation will have no legal effect and will be "for information purposes only". In addition, machine translations will be provided, which will be, in the words of the regulation, "for information purposes only and should not have any legal effect".
In several EPC contracting states, for the national part of a traditional bundle European patent (i.e., for a European patent without unitary effect), a translation has to be filed within a three-month time limit after the publication of grant in the European Patent Bulletin under Article 65 EPC, otherwise the patent is considered never to have existed (void ab initio) in that state. For the 22 parties to the London Agreement, this requirement has already been abolished or reduced (e.g. by dispensing with the requirement if the patent is available in English, and/or only requiring translation of the claims).
Translation requirements for the participating states in the enhanced cooperation for a unitary patent are shown below:
Article 7 of Regulation 1257/2012 provides that, as an object of property, a European patent with unitary effect will be treated "in its entirety and in all participating Member States as a national patent of the participating Member State in which that patent has unitary effect and in which the applicant had her/his residence or principal place of business or, by default, had a place of business on the date of filing the application for the European patent." Where the applicant had no domicile in a participating Member State, German law applies. Ullrich has criticized the system, which is similar to the Community Trademark and the Community Design, as being "in conflict with both the purpose of the creation of unitary patent protection and with primary EU law."
The Agreement on a Unified Patent Court provides the legal basis for the Unified Patent Court (UPC): a patent court for European patents (with and without unitary effect), with jurisdiction in those countries where the Agreement is in effect. In addition to regulations regarding the court structure, it also contains substantive provisions relating to the right to prevent use of an invention and allowed use by non-patent proprietors (e.g. for private non-commercial use), preliminary and permanent injunctions. Entry into force for the UPC took place after Germany deposited its instrument of ratification of the UPC Agreement, which triggered the countdown until the Agreement's entry into force on 1 June 2023.
The UPC Agreement was signed on 19 February 2013 by 24 EU member states, including all states then participating in the enhanced co-operation measures except Bulgaria and Poland. Bulgaria signed the agreement on 5 March 2013 following internal administrative procedures. Italy, which did not originally join the enhanced co-operation measures but subsequently signed up, did sign the UPC agreement. The agreement remains open to accession for all remaining EU member states, and all European Union member states except Spain, Poland and Croatia have signed it. States which do not participate in the unitary patent regulations can still become parties to the UPC agreement, which would allow the new court to handle European patents validated in the country.
On 18 January 2019, Kluwer Patent Blog wrote, "a recurring theme for some years has been that 'the UPC will start next year'". At the time, Brexit and a constitutional complaint in Germany were considered the main obstacles. In a decision of 13 February 2020, the German constitutional court ruled against the German ratification of the Agreement on the ground that the German Parliament had not voted with the required majority (two-thirds, according to the judgement). After a second vote and further, this time unsuccessful, constitutional complaints, Germany formally ratified the UPC Agreement on 7 August 2021. While the UK ratified the agreement in April 2018, the UK later withdrew from the Agreement following Brexit.
As of 21 February 2023, 17 countries had ratified the Agreement.
The Unified Patent Court has exclusive jurisdiction in infringement and revocation proceedings involving European patents with unitary effect, and during a transition period non-exclusive jurisdiction regarding European patents without unitary effect in the states where the Agreement applies, unless the patent proprietor decides to opt out. It furthermore has jurisdiction to hear cases against decisions of the European Patent Office regarding unitary patents. As a court of several member states of the European Union, it may (Court of First Instance) or must (Court of Appeal) refer questions on the interpretation of EU law (including the two unitary patent regulations, but excluding the UPC Agreement) to the European Court of Justice for a preliminary ruling when the interpretation is not obvious.
The court has two instances: a court of first instance and a court of appeal. The court of appeal and the registry have their seats in Luxembourg, while the central division of the court of first instance has its seat in Paris. The central division has a thematic branch in Munich (the London location has yet to be replaced by a new location within the EU). The court of first instance may further have local and regional divisions in all member states that wish to set up such divisions.
While the regulations formally apply to all 25 member states participating in the enhanced cooperation for a unitary patent, from the date the UPC agreement has entered into force for the first group of ratifiers, unitary patents will only extend to the territory of those participating member states where the UPC Agreement had entered into force when the unitary effect was registered. If the unitary effect territory subsequently expands to additional participating member states for which the UPC Agreement later enters into force, this will be reflected for all subsequently registered unitary patents, but the territorial scope of the unitary effect of existing unitary patents will not be extended to these states.
Unitary effect can be requested up to one month after grant of the European patent, directly at the EPO, with retroactive effect from the date of grant. However, according to the Draft Rules Relating to Unitary Patent Protection, unitary effect is registered only if the European patent has been granted with the same set of claims for all 25 member states participating in the regulations, whether or not the unitary effect applies to them. European patents automatically become a bundle of "national" European patents upon grant. Upon the grant of unitary effect, the "national" European patents are retroactively considered never to have existed in the territories where the unitary patent has effect. The unitary effect does not affect "national" European patents in states where the unitary patent does not apply; any "national" European patents applying outside the "unitary effect" zone co-exist with the unitary patent.
As the unitary patent is introduced by an EU regulation, it is expected to be valid not only in the mainland territory of the participating member states that are party to the UPC, but also in those of their special territories that are part of the European Union. As of April 2014, this includes the following fourteen territories:
In addition to the territories above, the European Patent Convention has been extended by two member states participating in the enhanced cooperation for a unitary patent to cover some of their dependent territories outside the European Union. In some of those territories, the unitary patent is de facto extended through application of national (French or Dutch) law:
However, the unitary patent does not apply in the French territories of French Polynesia and New Caledonia, as implementing legislation would need to be passed by those jurisdictions (rather than the French national legislation required for the other territories), and this has not been done.
The renewal fees are based on the cumulative renewal fees due in the four countries where European patents were most often validated in 2015 (Germany, France, the UK and the Netherlands), even though the UK has since left the unitary patent system following Brexit. The renewal fees of the unitary patent thus range from 35 euros in the second year to 4,855 euros in the 20th year. They are collected by the EPO, which keeps 50% of the fees and redistributes the other 50% to the participating member states.
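As a rough illustration of the fee mechanics just described, the sketch below applies the 50/50 split to a few sample renewal years. Only the two endpoint amounts (35 euros in year 2 and 4,855 euros in year 20) and the split itself come from the text; the year-10 figure is a hypothetical placeholder, not the official EPO fee schedule.

```python
# Illustrative renewal-fee schedule (euros per year of maintenance);
# the year-10 amount is invented, the other two come from the text.
renewal_fees_eur = {2: 35, 10: 1175, 20: 4855}

def split_fee(fee_eur: float) -> tuple[float, float]:
    """Split one renewal payment: the EPO keeps 50%, the remainder is
    redistributed to the participating member states."""
    epo_share = 0.5 * fee_eur
    return epo_share, fee_eur - epo_share

for year, fee in sorted(renewal_fees_eur.items()):
    epo, states = split_fee(fee)
    print(f"year {year:2d}: {fee:5d} EUR -> EPO {epo:7.1f}, states {states:7.1f}")
```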
Translation requirements, as well as the requirement to pay yearly maintenance fees in individual countries, presently make the European patent system costly for obtaining protection in the whole of the European Union.
In an impact assessment from 2011, the European Commission estimated that the costs of obtaining a patent in all 27 EU countries would drop from over 32,000 euros (mainly due to translation costs) to 6,500 euros (for the combination of an EU, Spanish and Italian patent) with the introduction of the unitary patent. Per capita costs of an EU patent were estimated at just 6 euros per million inhabitants in the original 25 participating countries (and 12 euros per million inhabitants in the 27 EU countries, for protection with a unitary, Italian and Spanish patent).
How the EU Commission has presented the expected cost savings has, however, been sharply criticized as exaggerated and based on unrealistic assumptions. The Commission notably considered the costs of validating a European patent in 27 countries, while in reality only about 1% of all granted European patents are currently validated in all 27 EU states. Based on more realistic assumptions, the cost savings are expected to be much lower than claimed by the Commission. For example, the EPO calculated that for an average European patent validated and maintained in four countries, the overall savings would be between 3% and 8%.
Work on a Community patent started in the 1970s, but the resulting Community Patent Convention (CPC) was a failure.
The "Luxembourg Conference on the Community Patent" took place in 1975 and the Convention for the European Patent for the common market, or (Luxembourg) Community Patent Convention (CPC), was signed at Luxembourg on 15 December 1975, by the 9 member states of the European Economic Community at that time. However, the CPC never entered into force. It was not ratified by enough countries.
Fourteen years later, the Agreement relating to Community patents was made at Luxembourg on 15 December 1989. It attempted to revive the CPC project, but also failed. This Agreement consisted of an amended version of the original Community Patent Convention. Twelve states signed the Agreement: Belgium, Denmark, France, Germany, Greece, Ireland, Italy, Luxembourg, the Netherlands, Portugal, Spain, and the United Kingdom. All of those states would have needed to ratify the Agreement for it to enter into force, but only seven did so: Denmark, France, Germany, Greece, Luxembourg, the Netherlands, and the United Kingdom.
Nevertheless, a majority of member states of the EEC at that time introduced some harmonisation into their national patent laws in anticipation of the entry into force of the CPC. A more substantive harmonisation took place at around the same time to take account of the European Patent Convention and the Strasbourg Convention.
In 2000, renewed efforts from the European Union resulted in a Community Patent Regulation proposal, sometimes abbreviated as CPR. It provided that the patent, once granted by the European Patent Office (EPO) in one of its procedural languages (English, German or French) and published in that language, with a translation of the claims into the two other procedural languages, would be valid without any further translation. The proposal aimed to achieve a considerable reduction in translation costs.
Nevertheless, additional translations could become necessary in legal proceedings against a suspected infringer. In such a situation, a suspected infringer who has been unable to consult the text of the patent in the official language of the Member State in which he is domiciled is presumed, until proven otherwise, not to have knowingly infringed the patent. To protect a suspected infringer who, in such a situation, has not acted deliberately, the proposal provided that the proprietor of the patent would not be able to obtain damages in respect of the period prior to the translation of the patent being notified to the infringer.
The proposed Community Patent Regulation was also to establish a court holding exclusive jurisdiction to invalidate issued patents, so that a Community patent's validity would be the same in all EU member states. This court was to be attached to the then European Court of Justice and Court of First Instance through use of provisions in the Treaty of Nice.
Discussion regarding the Community patent made clear progress in 2003, when a political agreement was reached on 3 March 2003. However, one year later, in March 2004, under the Irish presidency, the Competitiveness Council failed to agree on the details of the Regulation. In particular, the time delays for translating the claims, and the question of the authentic text of the claims in case of an infringement, remained problematic throughout the discussions and in the end proved insoluble.
In view of the difficulties in reaching an agreement on the Community patent, other legal agreements were proposed outside the European Union legal framework to reduce the cost of translation (of patents when granted) and litigation: the London Agreement, which entered into force on 1 May 2008 and has reduced the number of countries requiring translation of European patents granted under the European Patent Convention, along with the corresponding costs of obtaining a European patent; and the European Patent Litigation Agreement (EPLA), a proposal that has since lapsed.
After the council in March 2004, EU Commissioner Frits Bolkestein said that "The failure to agree on the Community Patent I am afraid undermines the credibility of the whole enterprise to make Europe the most competitive economy in the world by 2010." He added:
It is a mystery to me how Ministers at the so-called 'Competitiveness Council' can keep a straight face when they adopt conclusions for the Spring European Council on making Europe more competitive and yet in the next breath backtrack on the political agreement already reached on the main principles of the Community Patent in March of last year. I can only hope that one day the vested, protectionist interests that stand in the way of agreement on this vital measure will be sidelined by the over-riding importance and interests of European manufacturing industry and Europe's competitiveness. That day has not yet come.
Jonathan Todd, the Commission's Internal Market spokesman, declared:
Normally, after the common political approach, the text of the regulation is agreed very quickly. Instead, some Member States appear to have changed their positions. (...) It is extremely unfortunate that European industry's competitiveness, innovation and R&D are being sacrificed for the sake of preserving narrow vested interests.
European Commission President Romano Prodi, asked to evaluate his five-year term, cited as his weak point the failure of many EU governments to implement the "Lisbon Agenda", agreed in 2001. In particular, he cited the failure to agree on a Europe-wide patent, or even the languages to be used for such a patent, "because member states did not accept a change in the rules; they were not coherent".
There is support for the Community patent from various quarters. From the point of view of the European Commission, the Community patent is an essential step towards creating a level playing field for trade within the European Union. Smaller businesses also offer support, provided the Community patent achieves its aim of providing a relatively inexpensive way of obtaining patent protection across a wide trading area.
For larger businesses, however, other issues come into play, which have tended to dilute overall support. In general, these businesses recognise that the current European patent system provides the best possible protection given the need to satisfy national sovereignty requirements, such as those regarding translation and enforcement. The Community Patent proposal was generally supported where it would do away with both of these issues, but there was some concern about the level of competence of the proposed European Patent Court: a business would be reluctant to obtain a Europe-wide patent if it ran the risk of being revoked by an inexperienced judge. Also, the question of translations would not go away; unless the users of the system could see a significant change in the position of some of the countries holding out for more of the patent specification to be translated on grant or before enforcement, it was understood that larger businesses (the bulk of the users of the patent system) would be unlikely to move away from the tried and tested European patent.
Thus, in 2005, the Community patent looked unlikely to be implemented in the near future. However, on 16 January 2006 the European Commission "launched a public consultation on how future action in patent policy to create an EU-wide system of protection can best take account of stakeholders' needs." The Community patent was one of the issues the consultation focused on. More than 2500 replies were received. According to the European Commission, the consultation showed that there is widespread support for the Community patent but not at any cost, and "in particular not on the basis of the Common Political Approach reached by EU Ministers in 2003".
In February 2007, EU Commissioner Charlie McCreevy was quoted as saying:
The proposal for an EU-wide patent is stuck in the mud. It is clear to me from discussions with member states that there is no consensus at present on how to improve the situation.
The European Commission released a white paper in April 2007 seeking to "improve the patent system in Europe and revitalise the debate on this issue." On 18 April 2007, at the European Patent Forum in Munich, Germany, Günter Verheugen, Vice-President of the European Commission, said that his proposal to support the European economy was "to have the London Agreement ratified by all member states, and to have a European patent judiciary set up, in order to achieve rapid implementation of the Community patent, which is indispensable". He further said that he believed this could be done within five years.
In October 2007, the Portuguese presidency of the Council of the European Union proposed an EU patent jurisdiction, "borrowing heavily from the rejected draft European Patent Litigation Agreement (EPLA)". In November 2007, EU ministers were reported to have made some progress towards a community patent legal system, with "some specific results" expected in 2008.
In 2008, the idea of using machine translations to translate patents was proposed to solve the language issue, which is partially responsible for blocking progress on the community patent. Meanwhile, European Commissioner for Enterprise and Industry Günter Verheugen declared at the European Patent Forum in May 2008 that there was an "urgent need" for a community patent.
In December 2009, it was reported that the Swedish EU presidency had achieved a breakthrough in negotiations concerning the community patent. The breakthrough was reported to involve setting up a single patent court for the EU; however, ministers conceded that much work remained to be done before the community patent would become a reality.
According to the agreed plan, the EU would accede to the European Patent Convention as a contracting state, and patents granted by the European Patent Office will, when validated for the EU, have unitary effect in the territory of the European Union. On 10 November 2010, it was announced that no agreement had been reached and that, "in spite of the progress made, [the Competitiveness Council of the European Union had] fallen short of unanimity by a small margin," with commentators reporting that the Spanish representative, citing the aim to avoid any discrimination, had "re-iterated at length the stubborn rejection of the Madrid Government of taking the 'Munich' three languages regime (English, German, French) of the European Patent Convention (EPC) as a basis for a future EU Patent."

https://en.wikipedia.org/wiki/Unitary_patent
6,763 | Cistron | A cistron is an alternative term for "gene". The word cistron is used to emphasize that genes exhibit a specific behavior in a cis-trans test; distinct positions (or loci) within a genome are cistronic.
The words cistron and gene were coined before the advancing state of biology made it clear that the concepts they refer to are practically equivalent. The same historical naming practices are responsible for many of the synonyms in the life sciences.
The term cistron was coined by Seymour Benzer in an article entitled The elementary units of heredity. The cistron was defined by an operational test applicable to most organisms that is sometimes referred to as a cis-trans test, but more often as a complementation test.
For example, suppose a mutation at a chromosome position x is responsible for a recessive trait in a diploid organism (where chromosomes come in pairs). We say that the mutation is recessive because the organism will exhibit the wild type phenotype (ordinary trait) unless both chromosomes of a pair have the mutation (homozygous mutation). Similarly, suppose a mutation at another position, y, is responsible for the same recessive trait. The positions x and y are said to be within the same cistron when an organism that has the mutation at x on one chromosome and the mutation at position y on the paired chromosome exhibits the recessive trait even though the organism is not homozygous for either mutation. When instead the wild type trait is expressed, the positions are said to belong to distinct cistrons / genes. Put simply, mutations in the same cistron fail to complement, whereas mutations in different cistrons may complement (see Benzer's experiments with the T4 bacteriophage rII system).
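The logic of the test can be made concrete in a few lines of code. The sketch below is illustrative only (it is not from the source, and the cistron names are hypothetical); it simply encodes the rule that a trans-heterozygote shows the mutant phenotype exactly when both recessive mutations fall in the same cistron.

```python
# Illustrative model of the cis-trans (complementation) test for two
# recessive mutations in a diploid organism. The cistron names below are
# hypothetical, chosen only to make the logic of the test explicit.

def complementation_test(mutation_a_cistron: str, mutation_b_cistron: str) -> str:
    """Predict the phenotype of a trans-heterozygote carrying one recessive
    mutation on each chromosome of a pair.

    If both mutations lie in the same cistron, neither chromosome supplies a
    working copy of that gene, so the recessive (mutant) trait appears.
    If they lie in different cistrons, each chromosome supplies the working
    copy the other lacks, and the wild type is restored (complementation).
    """
    if mutation_a_cistron == mutation_b_cistron:
        return "mutant phenotype (no complementation: same cistron)"
    return "wild type phenotype (complementation: different cistrons)"

# Mutations x and y assigned to cistrons purely for illustration:
print(complementation_test("cistron_A", "cistron_A"))  # same cistron -> mutant
print(complementation_test("cistron_A", "cistron_B"))  # different -> wild type
```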
For example, an operon is a stretch of DNA that is transcribed to create a contiguous segment of RNA, but contains more than one cistron / gene. The operon is said to be polycistronic, whereas ordinary genes are said to be monocistronic. | [
{
"paragraph_id": 0,
"text": "A cistron is an alternative term for \"gene\". The word cistron is used to emphasize that genes exhibit a specific behavior in a cis-trans test; distinct positions (or loci) within a genome are cistronic.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The words cistron and gene were coined before the advancing state of biology made it clear that the concepts they refer to are practically equivalent. The same historical naming practices are responsible for many of the synonyms in the life sciences.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "The term cistron was coined by Seymour Benzer in an article entitled The elementary units of heredity. The cistron was defined by an operational test applicable to most organisms that is sometimes referred to as a cis-trans test, but more often as a complementation test.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "For example, suppose a mutation at a chromosome position x {\\displaystyle x} is responsible for a change in recessive trait in a diploid organism (where chromosomes come in pairs). We say that the mutation is recessive because the organism will exhibit the wild type phenotype (ordinary trait) unless both chromosomes of a pair have the mutation (homozygous mutation). Similarly, suppose a mutation at another position, y {\\displaystyle y} , is responsible for the same recessive trait. The positions x {\\displaystyle x} and y {\\displaystyle y} are said to be within the same cistron when an organism that has the mutation at x {\\displaystyle x} on one chromosome and has the mutation at position y {\\displaystyle y} on the paired chromosome exhibits the recessive trait even though the organism is not homozygous for either mutation. When instead the wild type trait is expressed, the positions are said to belong to distinct cistrons / genes. Or simply put, mutations on the same cistrons will not complement; as opposed to mutations on different cistrons may complement (see Benzer's T4 bacteriophage experiments T4 rII system).",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "For example, an operon is a stretch of DNA that is transcribed to create a contiguous segment of RNA, but contains more than one cistron / gene. The operon is said to be polycistronic, whereas ordinary genes are said to be monocistronic.",
"title": "Definition"
}
] | A cistron is an alternative term for "gene". The word cistron is used to emphasize that genes exhibit a specific behavior in a cis-trans test; distinct positions within a genome are cistronic. | 2002-02-25T15:43:11Z | 2023-10-09T17:23:33Z | [
"Template:Short description",
"Template:More citations needed",
"Template:Reflist",
"Template:Cite book",
"Template:Genetics-stub"
] | https://en.wikipedia.org/wiki/Cistron |
6,766 | Commonwealth | A commonwealth is a traditional English term for a political community founded for the common good. The noun "commonwealth", meaning "public welfare, general good or advantage", dates from the 15th century. Originally a phrase (the common-wealth or the common wealth – echoed in the modern synonym "public wealth"), it comes from the old meaning of "wealth", which is "well-being", and is itself a loose translation of the Latin res publica. The term literally meant "common well-being". In the 17th century, the definition of "commonwealth" expanded from its original sense of "public welfare" or "commonweal" to mean "a state in which the supreme power is vested in the people; a republic or democratic state".
The term evolved to become a title to a number of political entities. Three countries – Australia, the Bahamas, and Dominica – have the official title "Commonwealth", as do four U.S. states and two U.S. territories. Since the early 20th century, the term has been used to name some fraternal associations of states, most notably the Commonwealth of Nations, an organisation primarily of former territories of the British Empire. The organisation is not to be confused with the realms of the Commonwealth.
Translations of Ancient Roman writers' works to English have on occasion translated "Res publica", and variants thereof, to "the commonwealth", a term referring to the Roman state as a whole.
The Commonwealth of England was the official name of the political unit (de facto military rule in the name of parliamentary supremacy) that replaced the Kingdom of England (after the English Civil War) from 1649–53 and 1659–60, under the rule of Oliver Cromwell and his son and successor Richard. From 1653 to 1659, although still legally known as a Commonwealth, the republic, united with the former Kingdom of Scotland, operated under different institutions (at times as a de facto monarchy) and is known by historians as the Protectorate. In a British context, it is sometimes referred to as the "Old Commonwealth".
In the later 20th century a socialist political party known as the Common Wealth Party was active. Previously a similarly named party, the Commonwealth Land Party, was in existence.
The Icelandic Commonwealth or the Icelandic Free State (Icelandic: Þjóðveldið) was the state existing in Iceland between the establishment of the Althing in 930 and the pledge of fealty to the Norwegian king in 1262. It was initially established by a public consisting largely of recent immigrants from Norway who had fled the unification of that country under King Harald Fairhair.
The Commonwealth of the Philippines was the administrative body that governed the Philippines from 1935 to 1946, aside from a period of exile in the Second World War from 1942 to 1945 when Japan occupied the country. It replaced the Insular Government, a United States territorial government, and was established by the Tydings–McDuffie Act. The Commonwealth was designed as a transitional administration in preparation for the country's full achievement of independence, which was achieved in 1946. The Commonwealth of the Philippines was a founding member of the United Nations.
Republic is still an alternative translation of the traditional name Rzeczpospolita of the Polish–Lithuanian Commonwealth. Wincenty Kadłubek (Vincent Kadlubo, 1160–1223) was the first to use the original Latin term res publica in the context of Poland, in his "Chronicles of the Kings and Princes of Poland". The name was used officially for the confederal union formed by Poland and Lithuania from 1569 to 1795.
It is also often referred to as the "Nobles' Commonwealth" (1505–1795, i.e., before the union). In the contemporary political doctrine of the Polish–Lithuanian Commonwealth, "our state is a Republic (or Commonwealth) under the presidency of the King". The Commonwealth introduced a doctrine of religious tolerance, the Warsaw Confederation, and had its own parliament, the Sejm, although elections were restricted to the nobility; its elected kings were bound to certain contracts, the Pacta conventa, from the beginning of each reign.
"A commonwealth of good counsaile" was the title of the 1607 English translation of the work of Wawrzyniec Grzymała Goślicki "De optimo senatore" that presented to English readers many of the ideas present in the political system of the Polish–Lithuanian Commonwealth.
Between 1914 and 1925, Catalonia was an autonomous region of Spain. Its government during that time was given the title mancomunidad (Catalan: mancomunitat), which is translated into English as "commonwealth". The Commonwealth of Catalonia had limited powers and was formed as a federation of the four Catalan provinces. A number of Catalan-language institutions were created during its existence.
Between 1838 and 1847, Liberia was officially known as the "Commonwealth of Liberia". It changed its name to the "Republic of Liberia" when it declared independence (and adopted a new constitution) in 1847.
"Commonwealth" was first proposed as a term for a federation of the six Australian crown colonies at the 1891 constitutional convention in Sydney. Its adoption was initially controversial, as it was associated by some with the republicanism of Oliver Cromwell (see above), but it was retained in all subsequent drafts of the constitution. The term was finally incorporated into law in the Commonwealth of Australia Constitution Act 1900, which established the federation. Australia operates under a federal system, in which power is divided between the federal (national) government and the state governments (the successors of the six colonies). So, in an Australian context, the term "Commonwealth" (capitalised), which is often abbreviated to Cth, refers to the federal government, and "Commonwealth of Australia" is the official name of the country.
The Bahamas, a Commonwealth realm, has used the official style Commonwealth of The Bahamas since its independence in 1973.
The small Caribbean republic of Dominica has used the official style Commonwealth of Dominica since 1978.
Four states of the United States of America officially designate themselves as "commonwealths". All four were part of Great Britain's possessions along the Atlantic coast of North America prior to the American Revolution. As such, they share a strong influence of English common law in some of their laws and institutions. The four are:
Two organized but unincorporated U.S. territories are called commonwealths. The two are:
In 2016, the Washington, D.C. city council also selected "Douglass Commonwealth" as the potential name of the proposed State of Washington, D.C., following the 2016 statehood referendum, at least partially in order to retain the initials "D.C." as the state's abbreviation.
The Commonwealth of Nations—formerly the British Commonwealth—is a voluntary association of 54 independent sovereign states, most of which were once part of the British Empire. The Commonwealth's membership includes both republics and monarchies. The Head of the Commonwealth was Queen Elizabeth II, who also reigned as monarch directly in the 16 member states known as Commonwealth realms until her death in 2022.
The Commonwealth of Independent States (CIS) is a loose alliance or confederation consisting of nine of the 15 former Soviet Republics, the exceptions being Turkmenistan (a CIS associate member), Lithuania, Latvia, Estonia, Ukraine, and Georgia. Georgia left the CIS in August 2008 following the 2008 Russian military invasion of South Ossetia and Abkhazia. Its creation signalled the dissolution of the Soviet Union, its purpose being to "allow a civilised divorce" between the Soviet Republics. The CIS has developed as a forum by which the member-states can co-operate in economics, defence, and foreign policy.
Labour MP Tony Benn sponsored a Commonwealth of Britain Bill several times between 1991 and 2001, intended to abolish the monarchy and establish a British republic. It never reached second reading. | [
{
"paragraph_id": 0,
"text": "A commonwealth is a traditional English term for a political community founded for the common good. The noun \"commonwealth\", meaning \"public welfare, general good or advantage\", dates from the 15th century. Originally a phrase (the common-wealth or the common wealth – echoed in the modern synonym \"public wealth\"), it comes from the old meaning of \"wealth\", which is \"well-being\", and is itself a loose translation of the Latin res publica. The term literally meant \"common well-being\". In the 17th century, the definition of \"commonwealth\" expanded from its original sense of \"public welfare\" or \"commonweal\" to mean \"a state in which the supreme power is vested in the people; a republic or democratic state\".",
"title": ""
},
{
"paragraph_id": 1,
"text": "The term evolved to become a title to a number of political entities. Three countries – Australia, the Bahamas, and Dominica – have the official title \"Commonwealth\", as do four U.S. states and two U.S. territories. Since the early 20th century, the term has been used to name some fraternal associations of states, most notably the Commonwealth of Nations, an organisation primarily of former territories of the British Empire. The organisation is not to be confused with the realms of the Commonwealth.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Translations of Ancient Roman writers' works to English have on occasion translated \"Res publica\", and variants thereof, to \"the commonwealth\", a term referring to the Roman state as a whole.",
"title": "Historical use"
},
{
"paragraph_id": 3,
"text": "The Commonwealth of England was the official name of the political unit (de facto military rule in the name of parliamentary supremacy) that replaced the Kingdom of England (after the English Civil War) from 1649–53 and 1659–60, under the rule of Oliver Cromwell and his son and successor Richard. From 1653 to 1659, although still legally known as a Commonwealth, the republic, united with the former Kingdom of Scotland, operated under different institutions (at times as a de facto monarchy) and is known by historians as the Protectorate. In a British context, it is sometimes referred to as the \"Old Commonwealth\".",
"title": "Historical use"
},
{
"paragraph_id": 4,
"text": "In the later 20th century a socialist political party known as the Common Wealth Party was active. Previously a similarly named party, the Commonwealth Land Party, was in existence.",
"title": "Historical use"
},
{
"paragraph_id": 5,
"text": "The Icelandic Commonwealth or the Icelandic Free State (Icelandic: Þjóðveldið) was the state existing in Iceland between the establishment of the Althing in 930 and the pledge of fealty to the Norwegian king in 1262. It was initially established by a public consisting largely of recent immigrants from Norway who had fled the unification of that country under King Harald Fairhair.",
"title": "Historical use"
},
{
"paragraph_id": 6,
"text": "The Commonwealth of the Philippines was the administrative body that governed the Philippines from 1935 to 1946, aside from a period of exile in the Second World War from 1942 to 1945 when Japan occupied the country. It replaced the Insular Government, a United States territorial government, and was established by the Tydings–McDuffie Act. The Commonwealth was designed as a transitional administration in preparation for the country's full achievement of independence, which was achieved in 1946. The Commonwealth of the Philippines was a founding member of the United Nations.",
"title": "Historical use"
},
{
"paragraph_id": 7,
"text": "Republic is still an alternative translation of the traditional name Rzeczpospolita of the Polish–Lithuanian Commonwealth. Wincenty Kadłubek (Vincent Kadlubo, 1160–1223) used for the first time the original Latin term res publica in the context of Poland in his \"Chronicles of the Kings and Princes of Poland\". The name was used officially for the confederal union formed by Poland and Lithuania 1569–1795.",
"title": "Historical use"
},
{
"paragraph_id": 8,
"text": "It is also often referred as \"Nobles' Commonwealth\" (1505–1795, i.e., before the union). In the contemporary political doctrine of the Polish–Lithuanian Commonwealth, \"our state is a Republic (or Commonwealth) under the presidency of the King\". The Commonwealth introduced a doctrine of religious tolerance called Warsaw Confederation, had its own parliament Sejm (although elections were restricted to nobility and elected kings, who were bound to certain contracts Pacta conventa from the beginning of the reign).",
"title": "Historical use"
},
{
"paragraph_id": 9,
"text": "\"A commonwealth of good counsaile\" was the title of the 1607 English translation of the work of Wawrzyniec Grzymała Goślicki \"De optimo senatore\" that presented to English readers many of the ideas present in the political system of the Polish–Lithuanian Commonwealth.",
"title": "Historical use"
},
{
"paragraph_id": 10,
"text": "Between 1914 and 1925, Catalonia was an autonomous region of Spain. Its government during that time was given the title mancomunidad (Catalan: mancomunitat), which is translated into English as \"commonwealth\". The Commonwealth of Catalonia had limited powers and was formed as a federation of the four Catalan provinces. A number of Catalan-language institutions were created during its existence.",
"title": "Historical use"
},
{
"paragraph_id": 11,
"text": "Between 1838 and 1847, Liberia was officially known as the \"Commonwealth of Liberia\". It changed its name to the \"Republic of Liberia\" when it declared independence (and adopted a new constitution) in 1847.",
"title": "Historical use"
},
{
"paragraph_id": 12,
"text": "\"Commonwealth\" was first proposed as a term for a federation of the six Australian crown colonies at the 1891 constitutional convention in Sydney. Its adoption was initially controversial, as it was associated by some with the republicanism of Oliver Cromwell (see above), but it was retained in all subsequent drafts of the constitution. The term was finally incorporated into law in the Commonwealth of Australia Constitution Act 1900, which established the federation. Australia operates under a federal system, in which power is divided between the federal (national) government and the state governments (the successors of the six colonies). So, in an Australian context, the term \"Commonwealth\" (capitalised), which is often abbreviated to Cth, refers to the federal government, and \"Commonwealth of Australia\" is the official name of the country.",
"title": "Current use"
},
{
"paragraph_id": 13,
"text": "The Bahamas, a Commonwealth realm, has used the official style Commonwealth of The Bahamas since its independence in 1973.",
"title": "Current use"
},
{
"paragraph_id": 14,
"text": "The small Caribbean republic of Dominica has used the official style Commonwealth of Dominica since 1978.",
"title": "Current use"
},
{
"paragraph_id": 15,
"text": "Four states of the United States of America officially designate themselves as \"commonwealths\". All four were part of Great Britain's possessions along the Atlantic coast of North America prior to the American Revolution. As such, they share a strong influence of English common law in some of their laws and institutions. The four are:",
"title": "Current use"
},
{
"paragraph_id": 16,
"text": "Two organized but unincorporated U.S. territories are called commonwealths. The two are:",
"title": "Current use"
},
{
"paragraph_id": 17,
"text": "In 2016, the Washington, D.C. city council also selected \"Douglass Commonwealth\" as the potential name of State of Washington, D.C., following the 2016 statehood referendum, at least partially in order to retain the initials \"D.C.\" as the state's abbreviation.",
"title": "Current use"
},
{
"paragraph_id": 18,
"text": "The Commonwealth of Nations—formerly the British Commonwealth—is a voluntary association of 54 independent sovereign states, most of which were once part of the British Empire. The Commonwealth's membership includes both republics and monarchies. The Head of the Commonwealth was Queen Elizabeth II, who also reigned as monarch directly in the 16 member states known as Commonwealth realms until her death in 2022.",
"title": "Current use"
},
{
"paragraph_id": 19,
"text": "The Commonwealth of Independent States (CIS) is a loose alliance or confederation consisting of nine of the 15 former Soviet Republics, the exceptions being Turkmenistan (a CIS associate member), Lithuania, Latvia, Estonia, Ukraine, and Georgia. Georgia left the CIS in August 2008 following the 2008 invasion of the Russian military into South Ossetia and Abkhazia. Its creation signalled the dissolution of the Soviet Union, its purpose being to \"allow a civilised divorce\" between the Soviet Republics. The CIS has developed as a forum by which the member-states can co-operate in economics, defence, and foreign policy.",
"title": "Current use"
},
{
"paragraph_id": 20,
"text": "Labour MP Tony Benn sponsored a Commonwealth of Britain Bill several times between 1991 and 2001, intended to abolish the monarchy and establish a British republic. It never reached second reading.",
"title": "Proposed use"
}
] | A commonwealth is a traditional English term for a political community founded for the common good. The noun "commonwealth", meaning "public welfare, general good or advantage", dates from the 15th century. Originally a phrase, it comes from the old meaning of "wealth", which is "well-being", and is itself a loose translation of the Latin res publica. The term literally meant "common well-being". In the 17th century, the definition of "commonwealth" expanded from its original sense of "public welfare" or "commonweal" to mean "a state in which the supreme power is vested in the people; a republic or democratic state". The term evolved to become a title to a number of political entities. Three countries – Australia, the Bahamas, and Dominica – have the official title "Commonwealth", as do four U.S. states and two U.S. territories. Since the early 20th century, the term has been used to name some fraternal associations of states, most notably the Commonwealth of Nations, an organisation primarily of former territories of the British Empire. The organisation is not to be confused with the realms of the Commonwealth. | 2001-10-18T06:33:36Z | 2023-12-25T12:55:43Z | [
"Template:Webarchive",
"Template:Wiktionary",
"Template:Main",
"Template:Cn",
"Template:Cite web",
"Template:ISBN",
"Template:About",
"Template:Reflist",
"Template:Cite book",
"Template:Lang-is",
"Template:See also",
"Template:Short description",
"Template:Refimprove",
"Template:Cite news"
] | https://en.wikipedia.org/wiki/Commonwealth |
6,767 | Commodore 1541 | The Commodore 1541 (also known as the CBM 1541 and VIC-1541) is a floppy disk drive which was made by Commodore International for the Commodore 64 (C64), Commodore's most popular home computer. The best-known floppy disk drive for the C64, the 1541 is a single-sided 170-kilobyte drive for 5¼" disks. The 1541 directly followed the Commodore 1540 (meant for the VIC-20).
The disk drive uses group coded recording (GCR) and contains a MOS Technology 6502 microprocessor, doubling as a disk controller and on-board disk operating system processor. The number of sectors per track varies from 17 to 21 (an early implementation of zone bit recording). The drive's built-in disk operating system is CBM DOS 2.6.
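The zone layout behind that 17-to-21 range can be tabulated directly. The short sketch below is illustrative Python; the zone boundaries are the commonly published 1541 values rather than figures stated in this article. It recomputes the drive's raw formatted capacity.

```python
# Zone bit recording on the 1541: outer (longer) tracks hold more 256-byte
# sectors than inner ones. Zone boundaries below are the commonly published
# 1541 values (tracks are numbered from 1 at the outer edge).
ZONES = [
    (range(1, 18), 21),   # tracks 1-17: 21 sectors each
    (range(18, 25), 19),  # tracks 18-24: 19 sectors each
    (range(25, 31), 18),  # tracks 25-30: 18 sectors each
    (range(31, 36), 17),  # tracks 31-35: 17 sectors each
]

def sectors_on_track(track: int) -> int:
    for tracks, sectors in ZONES:
        if track in tracks:
            return sectors
    raise ValueError(f"track {track} outside the standard 35-track format")

total_sectors = sum(sectors_on_track(t) for t in range(1, 36))
print(total_sectors)        # 683 sectors
print(total_sectors * 256)  # 174848 bytes of raw formatted capacity (~170 KB)
```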
The 1541 was priced at under US$400 at its introduction. A C64 plus a 1541 cost about $900, while an Apple II with no disk drive cost $1,295. The first 1541 drives produced in 1982 have a label on the front reading VIC-1541 and have an off-white case to match the VIC-20. In 1983, the 1541 switched to having the familiar beige case and a front label reading simply "1541" along with rainbow stripes to match the Commodore 64.
By 1983 a 1541 sold for $300 or less. After a home-computer price war instigated by Commodore, the C64 and 1541 together cost under $500. The drive became very popular and difficult to find. The company said that the shortage occurred because 90% of C64 owners bought the 1541 compared to its 30% expectation, but the press discussed what Creative Computing described as "an absolutely alarming return rate" because of defects. The magazine reported in March 1984 that it received three defective drives in two weeks, and Compute!'s Gazette reported in December 1983 that four of the magazine's seven drives had failed; "COMPUTE! Publications sorely needs additional 1541s for in-house use, yet we can't find any to buy. After numerous phone calls over several days, we were able to locate only two units in the entire continental United States", reportedly because of Commodore's attempt to resolve a manufacturing issue that caused the high failure rate.
The early (1982 to 1983) 1541s have a spring-eject mechanism (Alps drive), and the disks often fail to release. This style of drive has the popular nickname "Toaster Drive", because it requires the use of a knife or other hard thin object to pry out the stuck media just like a piece of toast stuck in an actual toaster. This was fixed later when Commodore changed the vendor of the drive mechanism and adopted the flip-lever Newtronics (Mitsumi) mechanism, greatly improving reliability. In addition, Commodore made the drive's controller board smaller and reduced its chip count compared to the early 1541s (which had a large PCB running the length of the case, with dozens of TTL chips). The beige-case Newtronics 1541 was produced from 1984 to 1986.
All but the very earliest non-II model 1541s can use either the Alps or Newtronics mechanism. Visually, the first models, of the VIC-1541 denomination, have an off-white color like the VIC-20 and VIC-1540. Then, to match the look of the C64, CBM changed the drive's color to brown-beige and the name to Commodore 1541.
The 1541's numerous shortcomings opened a market for a number of third-party clones of the disk drive. Examples include the Oceanic OC-118 a.k.a. Excelerator+, the MSD Super Disk single and dual drives, the Enhancer 2000, the Indus GT, Blue Chip Electronics's BCD/5.25, and CMD's FD-2000 and FD-4000. Nevertheless, the 1541 became the first disk drive to see widespread use in the home and Commodore sold millions of the units.
In 1986, Commodore released the 1541C, a revised version that offers quieter and slightly more reliable operation and a light beige case matching the color scheme of the Commodore 64C. It was replaced in 1988 by the 1541-II, which uses an external power supply to provide cooler operation and allows the drive to have a smaller desktop footprint (the power supply "brick" being placed elsewhere, typically on the floor). Later ROM revisions fixed assorted problems, including a software bug that causes the save-and-replace command to corrupt data.
The Commodore 1570 is an upgrade from the 1541 for use with the Commodore 128, available in Europe. It offers MFM capability for accessing CP/M disks, improved speed, and somewhat quieter operation, but was only manufactured until Commodore got its production lines going with the 1571, the double-sided drive. Finally, Commodore made the small 3½-inch Commodore 1581, an MFM-based drive with an external power supply, giving the C128 and C64 access to 800 KB per disk.
The 1541 does not have DIP switches to change the device number. If a user adds more than one drive to a system, the user has to cut a trace in the circuit board to permanently change the drive's device number, or hand-wire an external switch to allow it to be changed externally. It is also possible to change the drive number via a software command, which is temporary and would be erased as soon as the drive was powered off.
1541 drives at power up always default to device #8. If multiple drives in a chain are used, then the startup procedure is to power on the first drive in the chain, alter its device number via a software command to the highest number in the chain (if three drives were used, then the first drive in the chain would be set to device #10), then power on the next drive, alter its device number to the next lowest, and repeat the procedure until the final drive at the end of the chain was powered on and left as device #8.
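The renumbering dance can be expressed as a short simulation. This is a toy model, not Commodore code: the dict stands in for each drive's volatile RAM, and in reality the change is made with a CBM DOS command sent over the serial bus and is lost at power-off, exactly as described above.

```python
# Toy model of the power-on renumbering procedure described above.

def power_up_chain(drive_count: int) -> dict:
    """Power on drives one at a time, renumbering each away from the
    power-on default (device 8) before the next drive joins the bus."""
    bus = {}  # position in chain -> current device number
    for position in range(drive_count):
        # The drive just powered on wakes up as device 8; renumber it at
        # once to the highest still-unused number, so the last drive in
        # the chain can simply keep the default 8.
        bus[position] = 8 + (drive_count - 1 - position)
    return bus

print(power_up_chain(3))  # {0: 10, 1: 9, 2: 8}, as in the example above
```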
Unlike the Apple II, where support for two drives is normal, it is relatively uncommon for Commodore software to support this setup, and the CBM DOS copy file command is not able to copy files between drives – a third party copy utility is necessary.
The pre-II 1541s also have an internal power source, which generates a lot of heat. The heat generation was a frequent source of humour. For example, Compute! stated in 1988 that "Commodore 64s used to be a favorite with amateur and professional chefs since they could compute and cook on top of their 1500-series disk drives at the same time". A series of humorous tips in MikroBitti in 1989 said "When programming late, coffee and kebab keep nicely warm on top of the 1541." The MikroBitti review of the 1541-II said that its external power source "should end the jokes about toasters".
The drive-head mechanism installed in the early production years is notoriously easy to misalign. The most common cause of the 1541's drive head knocking and subsequent misalignment is copy-protection schemes on commercial software. The main cause of the problem is that the disk drive itself does not feature any means of detecting when the read/write head reaches track zero. Accordingly, when a disk is not formatted or a disk error occurs, the unit tries to move the head 40 times in the direction of track zero (although the 1541 DOS only uses 35 tracks, the drive mechanism itself is a 40-track unit, so this ensured track zero would be reached no matter where the head was before). Once track zero is reached, every further attempt to move the head in that direction would cause it to be rammed against a solid stop: for example, if the head happened to be on track 18 (where the directory is located) before this procedure, the head would be actually moved 18 times, and then rammed against the stop 22 times. This ramming gives the characteristic "machine gun" noise and sooner or later throws the head out of alignment.
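The arithmetic of the "machine gun" seek is easy to model. The sketch below is an illustration only, using the track numbering of the example above (in which 18 outward steps from track 18 reach the stop); it counts how many of the 40 blind steps end up as knocks.

```python
# Sketch of the blind "bump to track zero" seek that produces the knocking.
# The drive has no track-zero sensor, so it simply issues 40 outward steps;
# every step taken while already at the stop slams the head against it.

def bump_seek(current_track: int, steps: int = 40) -> int:
    """Return how many of the blind outward steps hit the mechanical stop."""
    knocks = 0
    for _ in range(steps):
        if current_track > 0:
            current_track -= 1   # a genuine step toward track zero
        else:
            knocks += 1          # already at the stop: knock
    return knocks

print(bump_seek(18))  # 22 knocks, matching the track-18 example above
```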
A defective head-alignment part likely caused many of the reliability issues in early 1541 drives; one dealer told Compute!'s Gazette in 1983 that the part had caused all but three of several hundred drive failures that he had repaired. The drives were so unreliable that Info magazine joked, "Sometimes it seems as if one of the original design specs ... must have said 'Mean time between failure: 10 accesses.'" Users can realign the drive themselves with a software program and a calibration disk. The user can remove the drive from its case and then loosen the screws holding the stepper motor that moves the head; then, with the calibration disk in the drive, gently turn the stepper motor back and forth until the program shows a good alignment. The screws are then tightened and the drive is put back into its case.
A third-party fix for the 1541 appeared in which the solid head stop was replaced by a sprung stop, giving the head a much easier life. The later 1571 drive (which is 1541-compatible) incorporates track-zero detection by photo-interrupter and is thus immune to the problem. Also, a software solution, which resides in the drive controller's ROM, prevents the rereads from occurring, though this can cause problems when genuine errors do occur.
Due to the alignment issues on the Alps drive mechanisms, Commodore switched suppliers to Newtronics in 1984. The Newtronics mechanism drives have a lever rather than a pull-down tab to close the drive door. Although the alignment issues were resolved after the switch, the Newtronics drives add a new reliability problem in that many of the read/write heads are improperly sealed, causing moisture to penetrate the head and short it out.
The 1541's PCB consists mainly of a 6502 CPU, two 6522 VIA chips, and 2k of work RAM. Up to 48k of RAM can be added; this is mainly useful for defeating copy protection schemes since an entire disk track could be loaded into drive RAM, while the standard 2k only accommodates a few sectors (theoretically eight, but some of the RAM was used by CBM DOS as work space). Some Commodore users use 1541s as an impromptu math coprocessor by uploading math-intensive code to the drive for background processing.
The 1541 uses a proprietary serialized derivative of the IEEE-488 parallel interface, found in previous disk drives for the PET/CBM range of personal and business computers, but when the VIC-20 was in development, a cheaper alternative to the expensive IEEE-488 cables was sought. To ensure a ready supply of inexpensive cabling for its home computer peripherals, Commodore chose standard DIN connectors for the serial interface. Disk drives and other peripherals such as printers connect to the computer via a daisy chain setup, necessitating only a single connector on the computer itself.
IEEE Spectrum in 1985 stated that:
The one major flaw of the C-64 is not in the machine itself, but in its disk drive. With a reasonably fast disk drive and an adequate disk-operating system (DOS), the C-64 could compete in the business market with the Apple and perhaps with other business computers. With the present disk drive, though, it is hard-pressed to lose its image as a toy.
The C-64's designers blamed the 1541's slow speed on the marketing department's insistence that the computer be compatible with the 1540, which is slow because of a flaw in the 6522 VIA interface controller. Initially, Commodore intended to use a hardware shift register (one component of the 6522) to maintain fast drive speeds with the new serial interface. However, a hardware bug with this chip prevents the initial design from working as anticipated, and the ROM code was hastily rewritten to handle the entire operation in software. According to Jim Butterfield, this causes a speed reduction by a factor of five; had 1540 compatibility not been a requirement, the disk interface would have been much faster. In any case, the C64 normally cannot work with a 1540 unless the VIC-II display output is disabled via a register write to the DEN bit (register $D011, bit 4), which stops the halting of the CPU during certain video lines to ensure correct serial timing.
As implemented on the VIC-20 and C64, Commodore DOS transfers 300 bytes per second, compared to the Atari 810's 2,400 bytes per second, the Apple Disk II's 15,000 bytes per second, and the 300-baud data rate of the Commodore Datasette storage system. About 20 minutes are needed to copy one disk—10 minutes of reading time, and 10 minutes of writing time. However, since both the computer and the drive can easily be reprogrammed, third parties quickly wrote more efficient firmware that would speed up drive operations drastically. Without hardware modifications, some "fast loader" utilities (which bypassed routines in the 1541's onboard ROM) managed to achieve speeds of up to 4 KB/s. The most common of these products are the Epyx Fast Load, the Final Cartridge, and the Action Replay plug-in ROM cartridges, which all have machine code monitor and disk editor software on board as well. The popular Commodore computer magazines of the era also entered the arena with type-in fast-load utilities, with Compute!'s Gazette publishing TurboDisk in 1985 and RUN publishing Sizzle in 1987.
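Those throughput figures can be sanity-checked with trivial arithmetic. The sketch below is just that arithmetic, assuming a full 683-sector disk side and taking "4 KB/s" as 4,096 bytes per second.

```python
# Back-of-the-envelope check of the load times implied by the figures above:
# a full disk side at the stock 300 bytes/s versus a ~4 KB/s fast loader.
DISK_BYTES = 683 * 256                    # all 683 sectors of a 35-track disk

stock_minutes = DISK_BYTES / 300 / 60     # stock Commodore DOS serial rate
fast_minutes = DISK_BYTES / 4096 / 60     # typical software-only fast loader

print(f"stock DOS: {stock_minutes:.1f} min")  # ~9.7 min, matching the
                                              # "10 minutes of reading" above
print(f"fast load: {fast_minutes:.1f} min")   # ~0.7 min
```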
Even though each 1541 has its own on-board disk controller and disk operating system, it is not possible for a user to command two 1541 drives to copy a disk (one drive reading and the other writing) as with older dual drives like the 4040 that was often found with the PET computer, and which the 1541 is backward-compatible with (it can read 4040 disks but not write to them as a minor difference in the number of header bytes makes the 4040 and 1541 only read-compatible). Originally, to copy from drive to drive, software running on the C64 was needed and it would first read from one drive into computer memory, then write out to the other. Only when Fast Hack'em and, later, other disk backup programs were released, was true drive-to-drive copying possible for a pair of 1541s. The user could, if they wished, unplug the C64 from the drives (i.e., from the first drive in the daisy chain) and do something else with the computer as the drives proceeded to copy the entire disk.
The 1541 drive uses standard 5¼-inch double-density floppy media; high-density media will not work due to its different magnetic coating requiring a higher magnetic coercivity. As the GCR encoding scheme does not use the index hole, the drive was also compatible with hard-sectored disks. The standard CBM DOS format is 170 KB with 35 tracks and 256-byte sectors. It is similar to the format used on the PET 2031, 2040 & 4040 drives, but a minor difference in the number of header bytes makes these drives and the 1541 only read-compatible; disks formatted with one drive cannot be written to by the other. The drives will allow writes to occur, but the inconsistent header size will damage the data in the data portions of each track.
The 4040 drives use Shugart SA-400s, which were 35-track units, thus the format there is due to physical limitations of the drive mechanism. The 1541 uses 40-track mechanisms, but Commodore intentionally limited the CBM DOS format to 35 tracks because of reliability issues with the early units. It is possible via low-level programming to move the drive head to tracks 36–40 and write on them; commercial software sometimes does this for copy-protection purposes and/or to fit additional data on the disk.
However, one track is reserved by DOS for directory and file allocation information (the BAM, block availability map). Since, for normal files, two bytes of each physical sector are used by DOS as a pointer to the next physical track and sector of the file, only 254 of the 256 bytes of a block are available for file contents.
If the disk side is not otherwise prepared with a custom format (e.g. for data disks), 664 blocks are free after formatting, giving 664 × 254 = 168,656 bytes (or almost 165 KB) for user data.
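The same numbers fall out of a three-line computation (illustrative; the 19-sector size of track 18 follows from the zone layout shown earlier):

```python
# Recomputing the user-data capacity quoted above from first principles.
TOTAL_BLOCKS = 683           # all sectors in the standard 35-track format
DIRECTORY_TRACK_BLOCKS = 19  # track 18 (BAM + directory) is reserved by DOS
PAYLOAD_PER_BLOCK = 254      # 256 bytes minus the 2-byte track/sector link

blocks_free = TOTAL_BLOCKS - DIRECTORY_TRACK_BLOCKS
print(blocks_free)                      # 664 "blocks free" after formatting
print(blocks_free * PAYLOAD_PER_BLOCK)  # 168656 bytes of user data
```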
By using custom formatting and load/save routines (sometimes included in third-party DOSes, see below), all of the mechanically possible 40 tracks can be used.
Owing to the drive's non-use of the index hole, it is also possible to make "flippy floppies" by inserting the diskette upside-down and formatting the other side, and it is commonplace and normal for commercial software to be distributed on such disks.
Tracks 36–42 are non-standard. The quoted bit rate is the raw rate between the read/write head and the signal circuitry, so the actual useful data rate is lower by a factor of 5/4 due to GCR encoding.
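That 5/4 factor comes from the 4-bit-to-5-bit GCR translation itself. The sketch below uses the commonly published Commodore GCR code table (documented for the 1541's DOS, though not given in this article) to show the expansion:

```python
# Commonly published Commodore GCR table: each 4-bit nibble becomes a 5-bit
# code chosen so the recorded stream never contains more than two
# consecutive zero bits, which keeps the read clock in sync.
GCR = [0b01010, 0b01011, 0b10010, 0b10011,
       0b01110, 0b01111, 0b10110, 0b10111,
       0b01001, 0b11001, 0b11010, 0b11011,
       0b01101, 0b11101, 0b11110, 0b10101]

def gcr_encode(data: bytes) -> int:
    """Concatenate the 5-bit GCR codes for every nibble of `data`.

    Each 8-bit byte becomes 10 recorded bits, hence the 5/4 overhead."""
    out = 0
    for byte in data:
        out = (out << 5) | GCR[byte >> 4]    # high nibble first
        out = (out << 5) | GCR[byte & 0x0F]  # then the low nibble
    return out

print(bin(gcr_encode(b"\x0f")))  # 0b101010101: 8 data bits -> 10 recorded
                                 # bits (the leading zero is not displayed)
```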
The 1541 disk typically has 35 tracks. Track 18 is reserved; the remaining tracks are available for data storage. The header is on 18/0 (track 18, sector 0) along with the BAM, and the directory starts on 18/1 (track 18, sector 1). The file interleave is 10 blocks, while the directory interleave is 3 blocks.
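Interleaving spaces consecutive blocks of a file around the track so the next block arrives under the head about when the drive is ready for it. The sketch below is a simplified model of the ordering a fixed interleave produces; real CBM DOS allocates blocks through the BAM, so the details differ in practice.

```python
# Simplified model of sector ordering under a software interleave, as used
# for file chains (interleave 10) and directory blocks (interleave 3).
def interleaved_order(sector_count: int, interleave: int) -> list:
    """Visit the sectors of one track, stepping `interleave` sectors ahead
    each time and slipping forward past sectors already used."""
    order, seen, sector = [], set(), 0
    while len(order) < sector_count:
        while sector in seen:                # slot taken: slip to the next
            sector = (sector + 1) % sector_count
        order.append(sector)
        seen.add(sector)
        sector = (sector + interleave) % sector_count
    return order

print(interleaved_order(19, 10))  # file-chain order on track 18 (19 sectors)
print(interleaved_order(19, 3))   # directory-chain order on track 18
```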
Header contents: The header is similar to other Commodore disk headers, the structural differences being the BAM offset ($04) and size, and the label+ID+type offset ($90).
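For readers with a disk image at hand, the header fields above can be located in a standard 35-track .d64 file. A minimal sketch follows, assuming a hypothetical image named game.d64 and the standard sector-ordered .d64 layout (not something specified in this article):

```python
# Track 18, sector 0 sits immediately after tracks 1-17 in a sector-ordered
# .d64 image: 17 tracks x 21 sectors x 256 bytes = 91392 bytes (0x16500).
SECTORS_BEFORE_TRACK_18 = 17 * 21
HEADER_OFFSET = SECTORS_BEFORE_TRACK_18 * 256

with open("game.d64", "rb") as f:  # "game.d64" is a hypothetical image name
    f.seek(HEADER_OFFSET)
    header = f.read(256)

bam = header[0x04:0x90]    # block availability map, starting at offset $04
label = header[0x90:0xA0]  # disk name at offset $90 (PETSCII, $A0-padded)
print(label.decode("ascii", errors="replace"))
```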
Early copy protection schemes deliberately introduce read errors on the disk, the software refusing to load unless the correct error message is returned. The general idea is that simple disk-copy programs are incapable of copying the errors. When one of these errors is encountered, the disk drive (as many floppy disk drives do) will make one or more reread attempts after first resetting the head to track zero. Few of these schemes have much deterrent effect, as various software companies soon released "nibbler" utilities that enable protected disks to be copied and, in some cases, the protection removed.
Commodore copy protection sometimes fails on specific hardware configurations. Gunship, for example, does not load if a second disk drive or printer is connected to the computer. Similarly, Roland's Ratrace will crash if additional hardware is detected. The tape version will even crash if a floppy drive is switched on while the game is running.
{
"paragraph_id": 0,
"text": "The Commodore 1541 (also known as the CBM 1541 and VIC-1541) is a floppy disk drive which was made by Commodore International for the Commodore 64 (C64), Commodore's most popular home computer. The best-known floppy disk drive for the C64, the 1541 is a single-sided 170-kilobyte drive for 5¼\" disks. The 1541 directly followed the Commodore 1540 (meant for the VIC-20).",
"title": ""
},
{
"paragraph_id": 1,
"text": "The disk drive uses group coded recording (GCR) and contains a MOS Technology 6502 microprocessor, doubling as a disk controller and on-board disk operating system processor. The number of sectors per track varies from 17 to 21 (an early implementation of zone bit recording). The drive's built-in disk operating system is CBM DOS 2.6.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The 1541 was priced at under US$400 at its introduction. A C64 plus a 1541 cost about $900, while an Apple II with no disk drive cost $1,295. The first 1541 drives produced in 1982 have a label on the front reading VIC-1541 and have an off-white case to match the VIC-20. In 1983, the 1541 switched to having the familiar beige case and a front label reading simply \"1541\" along with rainbow stripes to match the Commodore 64.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "By 1983 a 1541 sold for $300 or less. After a home-computer price war instigated by Commodore, the C64 and 1541 together cost under $500. The drive became very popular and difficult to find. The company said that the shortage occurred because 90% of C64 owners bought the 1541 compared to its 30% expectation, but the press discussed what Creative Computing described as \"an absolutely alarming return rate\" because of defects. The magazine reported in March 1984 that it received three defective drives in two weeks, and Compute!'s Gazette reported in December 1983 that four of the magazine's seven drives had failed; \"COMPUTE! Publications sorely needs additional 1541s for in-house use, yet we can't find any to buy. After numerous phone calls over several days, we were able to locate only two units in the entire continental United States\", reportedly because of Commodore's attempt to resolve a manufacturing issue that caused the high failures.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "The early (1982 to 1983) 1541s have a spring-eject mechanism (Alps drive), and the disks often fail to release. This style of drive has the popular nickname \"Toaster Drive\", because it requires the use of a knife or other hard thin object to pry out the stuck media just like a piece of toast stuck in an actual toaster. This was fixed later when Commodore changed the vendor of the drive mechanism (Mitsumi) and adopted the flip-lever Newtronics mechanism, greatly improving reliability. In addition, Commodore made the drive's controller board smaller and reduced its chip count compared to the early 1541s (which had a large PCB running the length of the case, with dozens of TTL chips). The beige-case Newtronics 1541 was produced from 1984 to 1986.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "All but the very earliest non-II model 1541s can use either the Alps or Newtronics mechanism. Visually, the first models, of the VIC-1541 denomination, have an off-white color like the VIC-20 and VIC-1540. Then, to match the look of the C64, CBM changed the drive's color to brown-beige and the name to Commodore 1541.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "The 1541's numerous shortcomings opened a market for a number of third-party clones of the disk drive. Examples include the Oceanic OC-118 a.k.a. Excelerator+, the MSD Super Disk single and dual drives, the Enhancer 2000, the Indus GT, Blue Chip Electronics's BCD/5.25, and CMD's FD-2000 and FD-4000. Nevertheless, the 1541 became the first disk drive to see widespread use in the home and Commodore sold millions of the units.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1986, Commodore released the 1541C, a revised version that offers quieter and slightly more reliable operation and a light beige case matching the color scheme of the Commodore 64C. It was replaced in 1988 by the 1541-II, which uses an external power supply to provide cooler operation and allows the drive to have a smaller desktop footprint (the power supply \"brick\" being placed elsewhere, typically on the floor). Later ROM revisions fixed assorted problems, including a software bug that causes the save-and-replace command to corrupt data.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The Commodore 1570 is an upgrade from the 1541 for use with the Commodore 128, available in Europe. It offers MFM capability for accessing CP/M disks, improved speed, and somewhat quieter operation, but was only manufactured until Commodore got its production lines going with the 1571, the double-sided drive. Finally, the small, external-power-supply-based, MFM-based Commodore 1581 3½-inch drive was made, giving 800 KB access to the C128 and C64.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The 1541 does not have DIP switches to change the device number. If a user adds more than one drive to a system, the user has to cut a trace in the circuit board to permanently change the drive's device number, or hand-wire an external switch to allow it to be changed externally. It is also possible to change the drive number via a software command, which is temporary and would be erased as soon as the drive was powered off.",
"title": "Design"
},
{
"paragraph_id": 10,
"text": "1541 drives at power up always default to device #8. If multiple drives in a chain are used, then the startup procedure is to power on the first drive in the chain, alter its device number via a software command to the highest number in the chain (if three drives were used, then the first drive in the chain would be set to device #10), then power on the next drive, alter its device number to the next lowest, and repeat the procedure until the final drive at the end of the chain was powered on and left as device #8.",
"title": "Design"
},
{
"paragraph_id": 11,
"text": "Unlike the Apple II, where support for two drives is normal, it is relatively uncommon for Commodore software to support this setup, and the CBM DOS copy file command is not able to copy files between drives – a third party copy utility is necessary.",
"title": "Design"
},
{
"paragraph_id": 12,
"text": "The pre-II 1541s also have an internal power source, which generates a lot of heat. The heat generation was a frequent source of humour. For example, Compute! stated in 1988 that \"Commodore 64s used to be a favorite with amateur and professional chefs since they could compute and cook on top of their 1500-series disk drives at the same time\". A series of humorous tips in MikroBitti in 1989 said \"When programming late, coffee and kebab keep nicely warm on top of the 1541.\" The MikroBitti review of the 1541-II said that its external power source \"should end the jokes about toasters\".",
"title": "Design"
},
{
"paragraph_id": 13,
"text": "The drive-head mechanism installed in the early production years is notoriously easy to misalign. The most common cause of the 1541's drive head knocking and subsequent misalignment is copy-protection schemes on commercial software. The main cause of the problem is that the disk drive itself does not feature any means of detecting when the read/write head reaches track zero. Accordingly, when a disk is not formatted or a disk error occurs, the unit tries to move the head 40 times in the direction of track zero (although the 1541 DOS only uses 35 tracks, the drive mechanism itself is a 40-track unit, so this ensured track zero would be reached no matter where the head was before). Once track zero is reached, every further attempt to move the head in that direction would cause it to be rammed against a solid stop: for example, if the head happened to be on track 18 (where the directory is located) before this procedure, the head would be actually moved 18 times, and then rammed against the stop 22 times. This ramming gives the characteristic \"machine gun\" noise and sooner or later throws the head out of alignment.",
"title": "Design"
},
{
"paragraph_id": 14,
"text": "A defective head-alignment part likely caused many of the reliability issues in early 1541 drives; one dealer told Compute!'s Gazette in 1983 that the part had caused all but three of several hundred drive failures that he had repaired. The drives were so unreliable that Info magazine joked, \"Sometimes it seems as if one of the original design specs ... must have said 'Mean time between failure: 10 accesses.'\" Users can realign the drive themselves with a software program and a calibration disk. The user can remove the drive from its case and then loosen the screws holding the stepper motor that move the head, then with the calibration disk in the drive gently turn the stepper motor back and forth until the program shows a good alignment. The screws are then tightened and the drive is put back into its case.",
"title": "Design"
},
{
"paragraph_id": 15,
"text": "A third-party fix for the 1541 appeared in which the solid head stop was replaced by a sprung stop, giving the head a much easier life. The later 1571 drive (which is 1541-compatible) incorporates track-zero detection by photo-interrupter and is thus immune to the problem. Also, a software solution, which resides in the drive controller's ROM, prevents the rereads from occurring, though this can cause problems when genuine errors do occur.",
"title": "Design"
},
{
"paragraph_id": 16,
"text": "Due to the alignment issues on the Alps drive mechanisms, Commodore switched suppliers to Newtronics in 1984. The Newtronics mechanism drives have a lever rather than a pull-down tab to close the drive door. Although the alignment issues were resolved after the switch, the Newtronics drives add a new reliability problem in that many of the read/write heads are improperly sealed, causing moisture to penetrate the head and short it out.",
"title": "Design"
},
{
"paragraph_id": 17,
"text": "The 1541's PCB consists mainly of a 6502 CPU, two 6522 VIA chips, and 2k of work RAM. Up to 48k of RAM can be added; this is mainly useful for defeating copy protection schemes since an entire disk track could be loaded into drive RAM, while the standard 2k only accommodates a few sectors (theoretically eight, but some of the RAM was used by CBM DOS as work space). Some Commodore users use 1541s as an impromptu math coprocessor by uploading math-intensive code to the drive for background processing.",
"title": "Design"
},
{
"paragraph_id": 18,
"text": "The 1541 uses a proprietary serialized derivative of the IEEE-488 parallel interface, found in previous disk drives for the PET/CBM range of personal and business computers, but when the VIC-20 was in development, a cheaper alternative to the expensive IEEE-488 cables was sought. To ensure a ready supply of inexpensive cabling for its home computer peripherals, Commodore chose standard DIN connectors for the serial interface. Disk drives and other peripherals such as printers connect to the computer via a daisy chain setup, necessitating only a single connector on the computer itself.",
"title": "Design"
},
{
"paragraph_id": 19,
"text": "IEEE Spectrum in 1985 stated that:",
"title": "Throughput and software"
},
{
"paragraph_id": 20,
"text": "The one major flaw of the C-64 is not in the machine itself, but in its disk drive. With a reasonably fast disk drive and an adequate disk-operating system (DOS), the C-64 could compete in the business market with the Apple and perhaps with other business computers. With the present disk drive, though, it is hard-pressed to lose its image as a toy.",
"title": "Throughput and software"
},
{
"paragraph_id": 21,
"text": "The C-64's designers blamed the 1541's slow speed on the marketing department's insistence that the computer be compatible with the 1540, which is slow because of a flaw in the 6522 VIA interface controller. Initially, Commodore intended to use a hardware shift register (one component of the 6522) to maintain fast drive speeds with the new serial interface. However, a hardware bug with this chip prevents the initial design from working as anticipated, and the ROM code was hastily rewritten to handle the entire operation in software. According to Jim Butterfield, this causes a speed reduction by a factor of five; had 1540 compatibility not been a requirement, the disk interface would have been much faster. In any case, the C64 normally cannot work with a 1540 unless the VIC-II display output is disabled via a register write to the DEN bit (register $D011, bit 4), which stops the halting of the CPU during certain video lines to ensure correct serial timing.",
"title": "Throughput and software"
},
{
"paragraph_id": 22,
"text": "As implemented on the VIC-20 and C64, Commodore DOS transfers 300 bytes per second, compared to the Atari 810's 2,400 bytes per second, the Apple Disk II's 15,000 bytes per second, and the 300-baud data rate of the Commodore Datasette storage system. About 20 minutes are needed to copy one disk—10 minutes of reading time, and 10 minutes of writing time. However, since both the computer and the drive can easily be reprogrammed, third parties quickly wrote more efficient firmware that would speed up drive operations drastically. Without hardware modifications, some \"fast loader\" utilities (which bypassed routines in the 1541's onboard ROM) managed to achieve speeds of up to 4 KB/s. The most common of these products are the Epyx Fast Load, the Final Cartridge, and the Action Replay plug-in ROM cartridges, which all have machine code monitor and disk editor software on board as well. The popular Commodore computer magazines of the era also entered the arena with type-in fast-load utilities, with Compute!'s Gazette publishing TurboDisk in 1985 and RUN publishing Sizzle in 1987.",
"title": "Throughput and software"
},
{
"paragraph_id": 23,
"text": "Even though each 1541 has its own on-board disk controller and disk operating system, it is not possible for a user to command two 1541 drives to copy a disk (one drive reading and the other writing) as with older dual drives like the 4040 that was often found with the PET computer, and which the 1541 is backward-compatible with (it can read 4040 disks but not write to them as a minor difference in the number of header bytes makes the 4040 and 1541 only read-compatible). Originally, to copy from drive to drive, software running on the C64 was needed and it would first read from one drive into computer memory, then write out to the other. Only when Fast Hack'em and, later, other disk backup programs were released, was true drive-to-drive copying possible for a pair of 1541s. The user could, if they wished, unplug the C64 from the drives (i.e., from the first drive in the daisy chain) and do something else with the computer as the drives proceeded to copy the entire disk.",
"title": "Throughput and software"
},
{
"paragraph_id": 24,
"text": "The 1541 drive uses standard 5¼-inch double-density floppy media; high-density media will not work due to its different magnetic coating requiring a higher magnetic coercivity. As the GCR encoding scheme does not use the index hole, the drive was also compatible with hard-sectored disks. The standard CBM DOS format is 170 KB with 35 tracks and 256-byte sectors. It is similar to the format used on the PET 2031, 2040 & 4040 drives, but a minor difference in the number of header bytes makes these drives and the 1541 only read-compatible; disks formatted with one drive cannot be written to by the other. The drives will allow writes to occur, but the inconsistent header size will damage the data in the data portions of each track.",
"title": "Media"
},
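The 170 KB figure follows directly from the track and sector layout. The per-zone sector counts in this Python sketch are standard 1541 documentation rather than something stated above, so treat them as an assumption of the example:

```python
# Computing the raw capacity of a CBM DOS formatted 1541 disk.
SECTOR_BYTES = 256
ZONES = [          # (number of tracks, sectors per track)
    (17, 21),      # tracks 1-17
    (7, 19),       # tracks 18-24
    (6, 18),       # tracks 25-30
    (5, 17),       # tracks 31-35
]

total_sectors = sum(tracks * sectors for tracks, sectors in ZONES)
print(total_sectors)                  # 683 sectors (blocks)
print(total_sectors * SECTOR_BYTES)   # 174848 bytes, i.e. the "170 KB" format

# Track 18 (19 sectors) holds the directory and BAM, which is why
# only 683 - 19 = 664 blocks are free for files after formatting.
print(total_sectors - 19)             # 664
```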
{
"paragraph_id": 25,
"text": "The 4040 drives use Shugart SA-400s, which were 35-track units, thus the format there is due to physical limitations of the drive mechanism. The 1541 uses 40 track mechanisms, but Commodore intentionally limited the CBM DOS format to 35 tracks because of reliability issues with the early units. It is possible via low-level programming to move the drive head to tracks 36–40 and write on them, this is sometimes done by commercial software for copy protection purposes and/or to get additional data on the disk.",
"title": "Media"
},
{
"paragraph_id": 26,
"text": "However, one track is reserved by DOS for directory and file allocation information (the BAM, block availability map). And since for normal files, two bytes of each physical sector are used by DOS as a pointer to the next physical track and sector of the file, only 254 out of the 256 bytes of a block are used for file contents.",
"title": "Media"
},
{
"paragraph_id": 27,
"text": "If the disk side is not otherwise prepared with a custom format, (e.g. for data disks), 664 blocks would be free after formatting, giving 664 × 254 = 168,656 bytes (or almost 165 KB) for user data.",
"title": "Media"
},
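A quick Python check of the arithmetic in the last two paragraphs:

```python
# 664 free blocks, each holding 254 bytes of file data
# (2 of the 256 bytes are the track/sector link to the next block).
blocks_free = 664
payload_per_block = 256 - 2

user_bytes = blocks_free * payload_per_block
print(user_bytes)           # 168656 bytes
print(user_bytes / 1024)    # ~164.7, the "almost 165 KB" quoted above
```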
{
"paragraph_id": 28,
"text": "By using custom formatting and load/save routines (sometimes included in third-party DOSes, see below), all of the mechanically possible 40 tracks can be used.",
"title": "Media"
},
{
"paragraph_id": 29,
"text": "Owing to the drive's non-use of the index hole, it is also possible to make \"flippy floppies\" by inserting the diskette upside-down and formatting the other side, and it is commonplace and normal for commercial software to be distributed on such disks.",
"title": "Media"
},
{
"paragraph_id": 30,
"text": "Tracks 36–42 are non-standard. The bitrate is the raw one between the read/write head and signal circuitry so actual useful data rate is a factor 5/4 less due to GCR encoding.",
"title": "Media"
},
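To make the 5/4 factor concrete: GCR maps every 4 data bits to 5 bits on disk, so the usable rate is the raw rate times 4/5. A minimal sketch, with the 250,000 bit/s raw rate chosen purely as an illustrative input:

```python
# GCR overhead: 4 data bits are recorded as 5 disk bits.
def usable_rate(raw_bps: float) -> float:
    """Usable data rate given the raw head-to-circuitry bit rate."""
    return raw_bps * 4 / 5

print(usable_rate(250_000))   # 200000.0 usable bits/s for a 250 kbit/s zone
```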
{
"paragraph_id": 31,
"text": "The 1541 disk typically has 35 tracks. Track 18 is reserved; the remaining tracks are available for data storage. The header is on 18/0 (track 18, sector 0) along with the BAM, and the directory starts on 18/1 (track 18, sector 1). The file interleave is 10 blocks, while the directory interleave is 3 blocks.",
"title": "Media"
},
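The interleave figures above can be visualized with a simplified model: DOS allocates sector (current + interleave) mod sectors-per-track next, giving the drive time to process one block before the next one passes under the head. Real CBM DOS allocation also skips sectors that are already in use, so this Python sketch shows only the idealized pattern.

```python
# Idealized interleave walk: the order in which sectors are chained.
def chain(sectors_per_track: int, interleave: int) -> list[int]:
    order, s = [], 0
    while s not in order:
        order.append(s)
        s = (s + interleave) % sectors_per_track
    return order

print(chain(21, 10))  # file interleave 10 on a 21-sector track (tracks 1-17)
print(chain(19, 3))   # directory interleave 3 on track 18 (19 sectors)
```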
{
"paragraph_id": 32,
"text": "Header contents: The header is similar to other Commodore disk headers, the structural differences being the BAM offset ($04) and size, and the label+ID+type offset ($90).",
"title": "Media"
},
{
"paragraph_id": 33,
"text": "Early copy protection schemes deliberately introduce read errors on the disk, the software refusing to load unless the correct error message is returned. The general idea is that simple disk-copy programs are incapable of copying the errors. When one of these errors is encountered, the disk drive (as do many floppy disk drives) will attempt one or more reread attempts after first resetting the head to track zero. Few of these schemes have much deterrent effect, as various software companies soon released \"nibbler\" utilities that enable protected disks to be copied and, in some cases, the protection removed.",
"title": "Uses"
},
{
"paragraph_id": 34,
"text": "Commodore copy protection sometimes fails on specific hardware configurations. Gunship, for example, does not load if a second disk drive or printer is connected to the computer. Similarly Roland's Ratrace will crash if additional hardware is detected. The tape version will even crash if a floppy drive is switched on while the game is running.",
"title": "Uses"
}
] | The Commodore 1541 is a floppy disk drive which was made by Commodore International for the Commodore 64 (C64), Commodore's most popular home computer. The best-known floppy disk drive for the C64, the 1541 is a single-sided 170-kilobyte drive for 5¼" disks. The 1541 directly followed the Commodore 1540. The disk drive uses group coded recording (GCR) and contains a MOS Technology 6502 microprocessor, doubling as a disk controller and on-board disk operating system processor. The number of sectors per track varies from 17 to 21. The drive's built-in disk operating system is CBM DOS 2.6. | 2001-10-12T16:12:13Z | 2023-09-01T07:54:21Z | [
"Template:Refbegin",
"Template:ISBN",
"Template:Short description",
"Template:Cn",
"Template:Further",
"Template:Val",
"Template:Cite web",
"Template:Cite magazine",
"Template:Authority control",
"Template:US$",
"Template:Citation needed",
"Template:R",
"Template:Resx",
"Template:Cite news",
"Template:Cite journal",
"Template:'",
"Template:Block quote",
"Template:Infobox information appliance",
"Template:Mono",
"Template:Reflist",
"Template:Refend",
"Template:Commodore disk drives"
] | https://en.wikipedia.org/wiki/Commodore_1541 |
6,769 | Commodore 1581 | The Commodore 1581 is a 3½-inch double-sided double-density floppy disk drive that was released by Commodore Business Machines (CBM) in 1987, primarily for its C64 and C128 home/personal computers. The drive stores 800 kilobytes using an MFM encoding, but in a format different from the MS-DOS (720 kB), Amiga (880 kB), and Mac Plus (800 kB) formats. With special software it is possible to read C1581 disks on an x86 PC system, and likewise to read MS-DOS and other formats of disks in the C1581 (using Big Blue Reader), provided that the PC or other floppy drive handles the "720 kB" size format. This capability was most frequently used to read MS-DOS disks. The drive was released in the summer of 1987 and quickly became popular with bulletin board system (BBS) operators and other users.
Like the 1541 and 1571, the 1581 has an onboard MOS Technology 6502 CPU with its own ROM and RAM, and uses a serial version of the IEEE-488 interface. Inexplicably, the drive's ROM contains commands for parallel use, although no parallel interface was available. Unlike the 1571, which is nearly 100% backward-compatible with the 1541, the 1581 is only compatible with previous Commodore drives at the DOS level and cannot utilize software that performs low-level disk access (as the vast majority of Commodore 64 games do).
The version of Commodore DOS built into the 1581 added support for partitions, which could also function as fixed-allocation subdirectories. PC-style subdirectories were rejected as being too difficult to reconcile with block availability maps, which were then still much in vogue and had long been the traditional way of tracking free blocks. The 1581 supports the C128's burst mode for fast disk access, but not when connected to an older Commodore machine like the Commodore 64. The 1581 provides a total of 3160 blocks free when formatted (a block being equal to 256 bytes). The number of permitted directory entries was also increased, to 296 entries. With a storage capacity of 800 kB, the 1581 is the highest-capacity serial-bus drive ever made by Commodore (the 1-MB SFD-1001 uses the parallel IEEE-488), and the only 3½" one. However, starting in 1991, Creative Micro Designs (CMD) made the FD-2000 high density (1.6 MB) and FD-4000 extra-high density (3.2 MB) 3½" drives, both of which offered not only a 1581-emulation mode but also 1541- and 1571-compatibility modes.
As on the 1541 and 1571, a nearly identical job queue is available to the user in zero page (except for job 0), providing a high degree of compatibility.
Unlike those of the 1541 and 1571, the low-level disk format used by the 1581 is broadly similar to the MS-DOS format, as the 1581 is built around a WD1770 FM/MFM floppy controller chip. The 1581 disk format consists of 80 tracks and ten 512-byte sectors per track, used as 20 logical sectors of 256 bytes each. Special software is required to read 1581 disks on a PC due to the different file system. An internal floppy drive and controller are required as well; USB floppy drives operate strictly at the file-system level and do not allow low-level disk access. The WD1770 controller chip, however, was the source of some early problems with 1581 drives, and the first production runs were recalled due to a high failure rate; the problem was quickly corrected. Later versions of the 1581 drive shipped with a smaller, more streamlined-looking external power supply.
The 1581 disk has 80 logical tracks, each with 40 logical sectors (the actual physical layout of the diskette is abstracted and managed by a hardware translation layer). The directory starts on 40/3 (track 40, sector 3). The disk header is on 40/0, and the BAM (block availability map) resides on 40/1 and 40/2.
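The physical and logical views of the 1581 format describe the same 800 KB, and the 3160-blocks-free figure follows once track 40 is set aside. A short Python cross-check; the only assumption is that all 40 logical sectors of track 40 are reserved for the header, BAM, and directory:

```python
# Physical view: 80 tracks per side, 2 sides, 10 sectors of 512 bytes.
physical = 80 * 2 * 10 * 512
print(physical)                 # 819200 bytes = 800 KB

# Logical view: 80 tracks of 40 x 256-byte sectors.
logical = 80 * 40 * 256
assert logical == physical      # same disk, two descriptions

blocks = logical // 256         # 3200 blocks in total
print(blocks - 40)              # 3160 blocks free once track 40 is reserved
```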
The layout of the header sector (40/0) and the two BAM sectors (40/1 and 40/2) is specified in tables not reproduced in this text.
{
"paragraph_id": 0,
"text": "The Commodore 1581 is a 3½-inch double-sided double-density floppy disk drive that was released by Commodore Business Machines (CBM) in 1987, primarily for its C64 and C128 home/personal computers. The drive stores 800 kilobytes using an MFM encoding but formats different from the MS-DOS (720 kB), Amiga (880 kB), and Mac Plus (800 kB) formats. With special software it's possible to read C1581 disks on an x86 PC system, and likewise, read MS-DOS and other formats of disks in the C1581 (using Big Blue Reader), provided that the PC or other floppy handles the \"720 kB\" size format. This capability was most frequently used to read MS-DOS disks. The drive was released in the summer of 1987 and quickly became popular with bulletin board system (BBS) operators and other users.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Like the 1541 and 1571, the 1581 has an onboard MOS Technology 6502 CPU with its own ROM and RAM, and uses a serial version of the IEEE-488 interface. Inexplicably, the drive's ROM contains commands for parallel use, although no parallel interface was available. Unlike the 1571, which is nearly 100% backward-compatible with the 1541, the 1581 is only compatible with previous Commodore drives at the DOS level and cannot utilize software that performs low-level disk access (as the vast majority of Commodore 64 games do).",
"title": ""
},
{
"paragraph_id": 2,
"text": "The version of Commodore DOS built into the 1581 added support for partitions, which could also function as fixed-allocation subdirectories. PC-style subdirectories were rejected as being too difficult to work with in terms of block availability maps, then still much in vogue, and which for some time had been the traditional way of inquiring into block availability. The 1581 supports the C128's burst mode for fast disk access, but not when connected to an older Commodore machine like the Commodore 64. The 1581 provides a total of 3160 blocks free when formatted (a block being equal to 256 bytes). The number of permitted directory entries was also increased, to 296 entries. With a storage capacity of 800 kB, the 1581 is the highest-capacity serial-bus drive that was ever made by Commodore (the 1-MB SFD-1001 uses the parallel IEEE-488), and the only 3½\" one. However, starting in 1991, Creative Micro Designs (CMD) made the FD-2000 high density (1.6 MB) and FD-4000 extra-high density (3.2 MB) 3½\" drives, both of which offered not only a 1581-emulation mode but also 1541- and 1571-compatibility modes.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Like the 1541 and 1571, a nearly identical job queue is available to the user in zero page (except for job 0), providing for exceptional degrees of compatibility.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Unlike the cases of the 1541 and 1571, the low-level disk format used by the 1581 is similar enough to the MS-DOS format as the 1581 is built around a WD1770 FM/MFM floppy controller chip. The 1581 disk format consists of 80 tracks and ten 512 byte sectors per track, used as 20 logical sectors of 256 bytes each. Special software is required to read 1581 disks on a PC due to the different file system. An internal floppy drive and controller are required as well; USB floppy drives operate strictly at the file system level and do not allow low-level disk access. The WD1770 controller chip, however, was the seat of some early problems with 1581 drives when the first production runs were recalled due to a high failure rate; the problem was quickly corrected. Later versions of the 1581 drive have a smaller, more streamlined-looking external power supply provided with them.",
"title": ""
},
{
"paragraph_id": 5,
"text": "The 1581 disk has 80 logical tracks, each with 40 logical sectors (the actual physical layout of the diskette is abstracted and managed by a hardware translation layer). The directory starts on 40/3 (track 40, sector 3). The disk header is on 40/0, and the BAM (block availability map) resides on 40/1 and 40/2.",
"title": "Specifications"
},
{
"paragraph_id": 6,
"text": "Header Contents",
"title": "Specifications"
},
{
"paragraph_id": 7,
"text": "BAM Contents, 40/1",
"title": "Specifications"
},
{
"paragraph_id": 8,
"text": "BAM Contents, 40/2",
"title": "Specifications"
}
] | The Commodore 1581 is a 3½-inch double-sided double-density floppy disk drive that was released by Commodore Business Machines (CBM) in 1987, primarily for its C64 and C128 home/personal computers. The drive stores 800 kilobytes using an MFM encoding, but in a format different from the MS-DOS, Amiga, and Mac Plus formats. With special software it is possible to read C1581 disks on an x86 PC system, and likewise to read MS-DOS and other formats of disks in the C1581, provided that the PC or other floppy drive handles the "720 kB" size format. This capability was most frequently used to read MS-DOS disks. The drive was released in the summer of 1987 and quickly became popular with bulletin board system (BBS) operators and other users. Like the 1541 and 1571, the 1581 has an onboard MOS Technology 6502 CPU with its own ROM and RAM, and uses a serial version of the IEEE-488 interface. Inexplicably, the drive's ROM contains commands for parallel use, although no parallel interface was available. Unlike the 1571, which is nearly 100% backward-compatible with the 1541, the 1581 is only compatible with previous Commodore drives at the DOS level and cannot utilize software that performs low-level disk access. The version of Commodore DOS built into the 1581 added support for partitions, which could also function as fixed-allocation subdirectories. PC-style subdirectories were rejected as being too difficult to reconcile with block availability maps, which were then still much in vogue and had long been the traditional way of tracking free blocks. The 1581 supports the C128's burst mode for fast disk access, but not when connected to an older Commodore machine like the Commodore 64. The 1581 provides a total of 3160 blocks free when formatted. The number of permitted directory entries was also increased, to 296 entries. With a storage capacity of 800 kB, the 1581 is the highest-capacity serial-bus drive ever made by Commodore, and the only 3½" one. However, starting in 1991, Creative Micro Designs (CMD) made the FD-2000 high density (1.6 MB) and FD-4000 extra-high density (3.2 MB) 3½" drives, both of which offered not only a 1581-emulation mode but also 1541- and 1571-compatibility modes. As on the 1541 and 1571, a nearly identical job queue is available to the user in zero page, providing a high degree of compatibility. Unlike those of the 1541 and 1571, the low-level disk format used by the 1581 is broadly similar to the MS-DOS format, as the 1581 is built around a WD1770 FM/MFM floppy controller chip. The 1581 disk format consists of 80 tracks and ten 512-byte sectors per track, used as 20 logical sectors of 256 bytes each. Special software is required to read 1581 disks on a PC due to the different file system. An internal floppy drive and controller are required as well; USB floppy drives operate strictly at the file-system level and do not allow low-level disk access. The WD1770 controller chip, however, was the source of some early problems with 1581 drives, and the first production runs were recalled due to a high failure rate; the problem was quickly corrected. Later versions of the 1581 drive shipped with a smaller, more streamlined-looking external power supply. | 2001-10-12T16:19:52Z | 2023-10-02T19:45:57Z | [
"Template:Redirect",
"Template:Cite book",
"Template:Refend",
"Template:Commodore disk drives",
"Template:More citations needed",
"Template:Infobox information appliance",
"Template:Nowrap",
"Template:R",
"Template:Reflist",
"Template:Cite web",
"Template:Refbegin"
] | https://en.wikipedia.org/wiki/Commodore_1581 |
6,771 | College football | College football refers to gridiron football that is played by teams of amateur student-athletes at universities and colleges. It was through collegiate competition that gridiron football first gained popularity in the United States.
Like gridiron football generally, college football is most popular in the United States and Canada. While no single governing body exists for college football in the United States, most schools, especially those at the highest levels of play, are members of the NCAA. In Canada, collegiate football competition is governed by U Sports for universities. The Canadian Collegiate Athletic Association (for colleges) governs soccer and other sports but not gridiron football. Other countries, such as Mexico, Japan and South Korea, also host college football leagues with modest levels of support.
Unlike most other major sports in North America, no official minor league farm organizations exist for American football or Canadian football. Therefore, college football is generally considered to be the second tier of American and Canadian football; ahead of high school competition, but below professional competition. In some parts of the United States, especially the South and Midwest, college football is more popular than professional football. For much of the 20th century, college football was generally considered to be more prestigious than professional football.
As the second highest tier of gridiron football competition in the United States, many college football players later play professionally in the NFL or other leagues. The NFL draft each spring sees 224 players selected and offered a contract to play in the league, with the vast majority coming from the NCAA. Other professional leagues, such as the CFL and XFL, additionally hold their own drafts each year which see many college players selected. Players who are not selected can still attempt to land a professional roster spot as an undrafted free agent. Despite these opportunities, only around 1.6% of NCAA college football players end up playing professionally in the NFL.
Even after the emergence of the professional National Football League (NFL), college football has remained extremely popular throughout the U.S. Although the college game features a much wider range of talent than its pro counterpart, the sheer number of fans following major colleges provides a financial equalizer for the game, with Division I programs (the highest level) playing in huge stadiums, six of which have seating capacities exceeding 100,000. In many cases, college stadiums employ bench-style seating, as opposed to individual seats with backs and arm rests (although many stadiums do have a small number of chair-back seats in addition to the bench seating). This allows them to seat more fans in a given amount of space than the typical professional stadium, which tends to have more features and comforts for fans. (Only three stadiums owned by U.S. colleges or universities, L&N Stadium at the University of Louisville, Center Parc Stadium at Georgia State University, and FAU Stadium at Florida Atlantic University, consist entirely of chair-back seating.)
College athletes, unlike players in the NFL, are not permitted by the NCAA to be paid salaries. Colleges are only allowed to provide non-monetary compensation such as athletic scholarships that provide for tuition, housing, and books. With new bylaws made by the NCAA, college athletes can now receive "name, image, and likeness" (NIL) deals, a way to get sponsorships and money before their pro debut.
Modern North American football has its origins in various games, all known as "football", played at public schools in Great Britain in the mid-19th century. By the 1840s, students at Rugby School were playing a game in which players were able to pick up the ball and run with it, a sport later known as rugby football. The game was taken to Canada by British soldiers stationed there and was soon being played at Canadian colleges.
The first documented gridiron football game was played at University College, a college of the University of Toronto, on November 9, 1861. One of the participants in the game involving University of Toronto students was (Sir) William Mulock, later Chancellor of the school. A football club was formed at the university soon afterward, although its rules of play at this stage are unclear.
In 1864, at Trinity College, also a college of the University of Toronto, F. Barlow Cumberland and Frederick A. Bethune devised rules based on rugby football. Modern Canadian football is widely regarded as having originated with a game played in Montreal, in 1865, when British Army officers played local civilians. The game gradually gained a following, and the Montreal Football Club was formed in 1868, the first recorded non-university football club in Canada.
Early games appear to have had much in common with the traditional "mob football" played in Great Britain. The games remained largely unorganized until the 19th century, when intramural games of football began to be played on college campuses. Each school played its own variety of football. Princeton University students played a game called "ballown" as early as 1820. A Harvard tradition known as "Bloody Monday" began in 1827; it consisted of a mass ballgame between the freshman and sophomore classes. In 1860, both the town police and the college authorities agreed that Bloody Monday had to go. The Harvard students responded by going into mourning for a mock figure called "Football Fightum", for whom they conducted funeral rites. The authorities held firm, and it was a dozen years before football was once again played at Harvard. Dartmouth played its own version called "Old division football", the rules of which were first published in 1871, though the game dates to at least the 1830s. All of these games, and others, shared certain commonalities. They remained largely "mob"-style games, with huge numbers of players attempting to advance the ball into a goal area, often by any means necessary. Rules were simple, and violence and injury were common. The violence of these mob-style games led to widespread protests and a decision to abandon them. Yale, under pressure from the city of New Haven, banned the play of all forms of football in 1860.
American football historian Parke H. Davis described the period between 1869 and 1875 as the "Pioneer Period"; the years 1876–93 he called the "Period of the American Intercollegiate Football Association"; and the years 1894–1933 he dubbed the "Period of Rules Committees and Conferences".
On November 6, 1869, Rutgers University faced Princeton University (then known as the College of New Jersey) in the first game of intercollegiate football, a contest that more closely resembled soccer than "football" as it is played today. It was played with a round ball and, like all early games, used a set of rules suggested by Rutgers captain William J. Leggett, based on The Football Association's first set of rules, which were an early attempt by former pupils of England's public schools to unify the rules of their schools' games and create a universal, standardized code; these rules bore little resemblance to the American game that would be developed in the following decades. It is still usually regarded as the first game of college football. The game was played at a Rutgers field. Two teams of 25 players attempted to score by kicking the ball into the opposing team's goal. Throwing or carrying the ball was not allowed, but there was plenty of physical contact between players. The first team to reach six goals was declared the winner. Rutgers won by a score of six to four. A rematch was played at Princeton a week later under Princeton's own set of rules (one notable difference was the awarding of a "free kick" to any player who caught the ball on the fly, a feature adopted from The Football Association's rules; the fair catch kick rule has survived through to the modern American game). Princeton won that game by a score of 8–0. Columbia joined the series in 1870, and by 1872 several schools were fielding intercollegiate teams, including Yale and Stevens Institute of Technology.
Columbia University was the third school to field a team. The Lions traveled from New York City to New Brunswick on November 12, 1870, and were defeated by Rutgers 6 to 3. The game suffered from disorganization and the players kicked and battled each other as much as the ball. Later in 1870, Princeton and Rutgers played again with Princeton defeating Rutgers 6–0. This game's violence caused such an outcry that no games at all were played in 1871. Football came back in 1872, when Columbia played Yale for the first time. The Yale team was coached and captained by David Schley Schaff, who had learned to play football while attending Rugby School. Schaff himself was injured and unable to play, but Yale won the game 3–0 nonetheless. Later in 1872, Stevens Tech became the fifth school to field a team. Stevens lost to Columbia, but beat both New York University and City College of New York during the following year.
By 1873, the college students playing football had made significant efforts to standardize their fledgling game. Teams had been scaled down from 25 players to 20. The only way to score was still to bat or kick the ball through the opposing team's goal, and the game was played in two 45 minute halves on fields 140 yards long and 70 yards wide. On October 20, 1873, representatives from Yale, Columbia, Princeton, and Rutgers met at the Fifth Avenue Hotel in New York City to codify the first set of intercollegiate football rules. Before this meeting, each school had its own set of rules and games were usually played using the home team's own particular code. At this meeting, a list of rules, based more on the Football Association's rules than the rules of the recently founded Rugby Football Union, was drawn up for intercollegiate football games.
Old "Football Fightum" had been resurrected at Harvard in 1872, when Harvard resumed playing football. Harvard, however, preferred to play a rougher version of football called "the Boston Game" in which the kicking of a round ball was the most prominent feature though a player could run with the ball, pass it, or dribble it (known as "babying"). The man with the ball could be tackled, although hitting, tripping, "hacking" and other unnecessary roughness was prohibited. There was no limit to the number of players, but there were typically ten to fifteen per side. A player could carry the ball only when being pursued.
As a result of this, Harvard refused to attend the rules conference organized by Rutgers, Princeton and Columbia at the Fifth Avenue Hotel in New York City on October 20, 1873, to agree on a set of rules and regulations that would allow them to play a form of football that was essentially Association football; and continued to play under its own code. While Harvard's voluntary absence from the meeting made it hard for them to schedule games against other American universities, it agreed to a challenge to play the rugby team of McGill University, from Montreal, in a two-game series. It was agreed that two games would be played on Harvard's Jarvis baseball field in Cambridge, Massachusetts on May 14 and 15, 1874: one to be played under Harvard rules, another under the stricter rugby regulations of McGill. Jarvis Field was at the time a patch of land at the northern point of the Harvard campus, bordered by Everett and Jarvis Streets to the north and south, and Oxford Street and Massachusetts Avenue to the east and west. Harvard beat McGill in the "Boston Game" on the Thursday and held McGill to a 0–0 tie on the Friday. The Harvard students took to the rugby rules and adopted them as their own. The games featured a round ball instead of a rugby-style oblong ball. This series of games represents an important milestone in the development of the modern game of American football. In October 1874, the Harvard team once again traveled to Montreal to play McGill in rugby, where they won by three tries.
Inasmuch as rugby football had been transplanted to Canada from England, the McGill team played under a set of rules which allowed a player to pick up the ball and run with it whenever he wished. Another rule, unique to McGill, was to count tries (the act of grounding the football past the opposing team's goal line; there was no end zone at this time), as well as goals, in the scoring. In the rugby rules of the time, a try only provided the attempt to kick a free goal from the field. If the kick was missed, the try did not score any points itself.
Harvard quickly took a liking to the rugby game, and to its use of the try, which until that time was not used in American football. The try would later evolve into the score known as the touchdown. On June 4, 1875, Harvard faced Tufts University in the first game between two American colleges played under rules similar to the McGill/Harvard contest, which was won by Tufts. The rules included each side fielding 11 men at any given time, the ball being advanced by kicking or carrying it, and tackles of the ball carrier stopping play – features that have carried over to the modern version of football played today.
Harvard later challenged its closest rival, Yale, and the Bulldogs accepted. The two teams agreed to play under a set of rules called the "Concessionary Rules", which involved Harvard conceding something to Yale's soccer and Yale conceding a great deal to Harvard's rugby. They decided to play with 15 players on each team. On November 13, 1875, Yale and Harvard played each other for the first time, with Harvard winning 4–0. At the first playing of The Game (as the annual contest between Harvard and Yale came to be named), the future "father of American football" Walter Camp was among the 2000 spectators in attendance. Camp, who would enroll at Yale the next year, was torn between admiration for Harvard's style of play and the misery of the Yale defeat, and he became determined to avenge it. Spectators from Princeton also carried the game back home, where it quickly became the most popular version of football.
On November 23, 1876, representatives from Harvard, Yale, Princeton, and Columbia met at the Massasoit House hotel in Springfield, Massachusetts to standardize a new code of rules based on the rugby game first introduced to Harvard by McGill University in 1874. Three of the schools—Harvard, Columbia, and Princeton—formed the Intercollegiate Football Association as a result of the meeting. Yale initially refused to join this association because of a disagreement over the number of players to be allowed per team (relenting in 1879), and Rutgers was not invited to the meeting. The rules that they agreed upon were essentially those of rugby union at the time, with the exception that points were awarded for scoring a try, not just for the conversion afterwards (the extra point). Incidentally, rugby was to make a similar change to its scoring system 10 years later.
Walter Camp is widely considered to be the most important figure in the development of American football. As a youth, he excelled in sports like track, baseball, and association football, and after enrolling at Yale in 1876, he earned varsity honors in every sport the school offered.
Following the introduction of rugby-style rules to American football, Camp became a fixture at the Massasoit House conventions where rules were debated and changed. Dissatisfied with what seemed to him to be a disorganized mob, he proposed his first rule change at the first meeting he attended in 1878: a reduction from fifteen players to eleven. The motion was rejected at that time but passed in 1880. The effect was to open up the game and emphasize speed over strength. Camp's most famous change, the establishment of the line of scrimmage and the snap from center to quarterback, was also passed in 1880. Originally, the snap was executed with the foot of the center. Later changes made it possible to snap the ball with the hands, either through the air or by a direct hand-to-hand pass. Rugby league followed Camp's example, and in 1906 introduced the play-the-ball rule, which greatly resembled Camp's early scrimmage and center-snap rules. In 1966, rugby league introduced a four-tackle rule (changed in 1972 to a six-tackle rule) based on Camp's early down-and-distance rules.
Camp's new scrimmage rules revolutionized the game, though not always as intended. Princeton, in particular, used scrimmage play to slow the game, making incremental progress towards the end zone during each down. Rather than increase scoring, which had been Camp's original intent, the rule was exploited to maintain control of the ball for the entire game, resulting in slow, unexciting contests. At the 1882 rules meeting, Camp proposed that a team be required to advance the ball a minimum of five yards within three downs. These down-and-distance rules, combined with the establishment of the line of scrimmage, transformed the game from a variation of rugby football into the distinct sport of American football.
Camp was central to several more significant rule changes that came to define American football. In 1881, the field was reduced in size to its modern dimensions of 120 by 53⅓ yards (109.7 by 48.8 meters). Several times in 1883, Camp tinkered with the scoring rules, finally arriving at four points for a touchdown, two points for kicks after touchdowns, two points for safeties, and five for field goals. Camp's innovations in the area of point scoring influenced rugby union's move to point scoring in 1890. In 1887, game time was set at two halves of 45 minutes each. Also in 1887, two paid officials—a referee and an umpire—were mandated for each game. A year later, the rules were changed to allow tackling below the waist, and in 1889, the officials were given whistles and stopwatches.
After leaving Yale in 1882, Camp was employed by the New Haven Clock Company until his death in 1925. Though no longer a player, he remained a fixture at annual rules meetings for most of his life, and he personally selected an annual All-American team every year from 1889 through 1924. The Walter Camp Football Foundation continues to select All-American teams in his honor.
College football expanded greatly during the last two decades of the 19th century. Several major rivalries date from this time period.
November 1890 was an active time in the sport. In Baldwin City, Kansas, on November 22, 1890, college football was first played in the state of Kansas. Baker beat Kansas 22–9. On the 27th, Vanderbilt played Nashville (Peabody) at Athletic Park and won 40–0. It was the first time organized football was played in the state of Tennessee. The 29th also saw the first instance of the Army–Navy Game. Navy won 24–0.
Rutgers was first to extend the reach of the game. An intercollegiate game was first played in the state of New York when Rutgers played Columbia on November 2, 1872. It was also the first scoreless tie in the history of the fledgling sport. Yale football started the same year, with its first match against Columbia, the nearest college to play football. It took place at Hamilton Park in New Haven and was the first game in New England. The game was essentially soccer with 20-man sides, played on a field 400 by 250 feet. Yale won 3–0, with Tommy Sherman scoring the first goal and Lew Irwin the other two.
After the first game against Harvard, Tufts took its squad to Bates College in Lewiston, Maine for the first football game played in Maine. This occurred on November 6, 1875.
Penn's Athletic Association was looking to pick "a twenty" to play a game of football against Columbia. This "twenty" never played Columbia, but did play twice against Princeton. Princeton won both games 6 to 0. The first of these happened on November 11, 1876, in Philadelphia and was the first intercollegiate game in the state of Pennsylvania.
Brown entered the intercollegiate game in 1878.
The first game where one team scored over 100 points happened on October 25, 1884, when Yale routed Dartmouth 113–0. It was also the first time one team scored over 100 points and the opposing team was shut out. The next week, Princeton outscored Lafayette 140 to 0.
The first intercollegiate game in the state of Vermont happened on November 6, 1886, between Dartmouth and Vermont at Burlington, Vermont. Dartmouth won 91 to 0.
Penn State played its first season in 1887, but had no head coach for its first five years, from 1887 to 1891. The team played its home games on the Old Main lawn on campus in State College, Pennsylvania. It compiled a 12–8–1 record in these seasons, playing as an independent from 1887 to 1890.
In 1891, the Pennsylvania Intercollegiate Football Association (PIFA) was formed. It consisted of Bucknell (University of Lewisburg), Dickinson, Franklin & Marshall, Haverford, Penn State and Swarthmore. Lafayette and Lehigh were excluded because it was felt they would dominate the Association. Penn State won the championship with a 4–1–0 record. Bucknell's record was 3–1–1 (losing to Franklin & Marshall and tying Dickinson). The Association was dissolved prior to the 1892 season.
The first nighttime football game was played in Mansfield, Pennsylvania on September 28, 1892, between Mansfield State Normal and Wyoming Seminary and ended at halftime in a 0–0 tie. The Army–Navy game of 1893 saw the first documented use of a football helmet by a player in a game. Joseph M. Reeves had a crude leather helmet made by a shoemaker in Annapolis and wore it in the game after being warned by his doctor that he risked death if he continued to play football after suffering an earlier kick to the head.
In 1879, the University of Michigan became the first school west of Pennsylvania to establish a college football team. On May 30, 1879, Michigan beat Racine College 1–0 in a game played in Chicago. The Chicago Daily Tribune called it "the first rugby-football game to be played west of the Alleghenies." Other Midwestern schools soon followed suit, including the University of Chicago, Northwestern University, and the University of Minnesota. The first western team to travel east was the 1881 Michigan team, which played at Harvard, Yale and Princeton. The nation's first college football league, the Intercollegiate Conference of Faculty Representatives (also known as the Western Conference), a precursor to the Big Ten Conference, was founded in 1895.
Led by coach Fielding H. Yost, Michigan became the first "western" national power. From 1901 to 1905, Michigan had a 56-game undefeated streak that included a 1902 trip to play in the first college football bowl game, which later became the Rose Bowl Game. During this streak, Michigan scored 2,831 points while allowing only 40.
Organized intercollegiate football was first played in the state of Minnesota on September 30, 1882, when Hamline was convinced to play Minnesota. Minnesota won 2 to 0. It was the first game west of the Mississippi River.
November 30, 1905, saw Chicago defeat Michigan 2 to 0. Dubbed "The First Greatest Game of the Century", it broke Michigan's 56-game unbeaten streak and marked the end of the "Point-a-Minute" years.
Organized intercollegiate football was first played in the state of Virginia and the South on November 2, 1873, in Lexington between Washington and Lee and VMI. Washington and Lee won 4–2. Some industrious students of the two schools had organized a game for October 23, 1869, but it was rained out. Students of the University of Virginia were playing pickup games of the kicking-style of football as early as 1870, and some accounts even claim the school organized a game against Washington and Lee College in 1871, but no record has been found of the score of this contest. Because records of these early matches are scant, some claim the November 13, 1887, Virginia v. Pantops Academy game as the first in Virginia.
On April 9, 1880, at Stoll Field, Transylvania University (then called Kentucky University) beat Centre College by the score of 13¾–0 in what is often considered the first recorded game played in the South. The first game of "scientific football" in the South was the first instance of the Victory Bell rivalry between North Carolina and Duke (then known as Trinity College) held on Thanksgiving Day, 1888, at the North Carolina State Fairgrounds in Raleigh, North Carolina.
On November 13, 1887, the Virginia Cavaliers and Pantops Academy fought to a scoreless tie in the first organized football game in the state of Virginia. Students at UVA were playing pickup games of the kicking-style of football as early as 1870, and some accounts even claim that some industrious ones organized a game against Washington and Lee College in 1871, just two years after Rutgers and Princeton's historic first game in 1869. But no record has been found of the score of this contest. Washington and Lee also claims a 4 to 2 win over VMI in 1873.
On October 18, 1888, the Wake Forest Demon Deacons defeated the North Carolina Tar Heels 6 to 4 in the first intercollegiate game in the state of North Carolina.
On December 14, 1889, Wofford defeated Furman 5 to 1 in the first intercollegiate game in the state of South Carolina. The game featured no uniforms, no positions, and the rules were formulated before the game.
January 30, 1892, saw the first football game played in the Deep South when the Georgia Bulldogs defeated Mercer 50–0 at Herty Field.
The beginnings of the contemporary Southeastern Conference and Atlantic Coast Conference start in 1894. The Southern Intercollegiate Athletic Association (SIAA) was founded on December 21, 1894, by William Dudley, a chemistry professor at Vanderbilt. The original members were Alabama, Auburn, Georgia, Georgia Tech, North Carolina, Sewanee, and Vanderbilt. Clemson, Cumberland, Kentucky, LSU, Mercer, Mississippi, Mississippi A&M (Mississippi State), Southwestern Presbyterian University, Tennessee, Texas, Tulane, and the University of Nashville joined the following year in 1895 as invited charter members. The conference was originally formed for "the development and purification of college athletics throughout the South".
It is thought that the first forward pass in football occurred on October 26, 1895, in a game between Georgia and North Carolina when, out of desperation, North Carolina back Joel Whitaker threw the ball instead of punting it and George Stephens caught it. On November 9, 1895, John Heisman executed a hidden ball trick utilizing quarterback Reynolds Tichenor to get Auburn's only touchdown in a 9–6 loss to Vanderbilt. It was the first game in the South decided by a field goal. Heisman later used the trick against Pop Warner's Georgia team. Warner picked up the trick and later used it at Cornell against Penn State in 1897. He then used it in 1903 at Carlisle against Harvard and garnered national attention.
The 1899 Sewanee Tigers are one of the all-time great teams of the early sport. The team went 12–0, outscoring opponents 322 to 10. Known as the "Iron Men", with just 13 men they had a six-day road trip with five shutout wins over Texas A&M; Texas; Tulane; LSU; and Ole Miss. It is recalled memorably with the phrase "... and on the seventh day they rested." Grantland Rice called them "the most durable football team I ever saw."
Organized intercollegiate football was first played in the state of Florida in 1901. A 7-game series between intramural teams from Stetson and Forbes occurred in 1894. The first intercollegiate game between official varsity teams was played on November 22, 1901. Stetson beat Florida Agricultural College at Lake City, one of the four forerunners of the University of Florida, 6–0, in a game played as part of the Jacksonville Fair.
On September 27, 1902, Georgetown beat Navy 4 to 0. It is claimed by Georgetown authorities as the game with the first ever "roving center" or linebacker when Percy Given stood up, in contrast to the usual tale of Germany Schulz. The first linebacker in the South is often considered to be Frank Juhan.
On Thanksgiving Day 1903, a game was scheduled in Montgomery, Alabama between the best teams from each region of the Southern Intercollegiate Athletic Association for an "SIAA championship game", pitting Cumberland against Heisman's Clemson. The game ended in an 11–11 tie causing many teams to claim the title. Heisman pressed hardest for Cumberland to get the claim of champion. It was his last game as Clemson head coach.
1904 saw big coaching hires in the South: Mike Donahue at Auburn, John Heisman at Georgia Tech, and Dan McGugin at Vanderbilt were all hired that year. Donahue and McGugin had both just come from the North, Donahue from Yale and McGugin from Michigan, and both were among the initial inductees of the College Football Hall of Fame. The undefeated 1904 Vanderbilt team scored an average of 52.7 points per game, the most in college football that season, and allowed just four points.
The first college football game in Oklahoma Territory occurred on November 7, 1895, when the "Oklahoma City Terrors" defeated the Oklahoma Sooners 34 to 0. The Terrors were a mix of Methodist college and high school students. The Sooners did not manage a single first down. By next season, Oklahoma coach John A. Harts had left to prospect for gold in the Arctic. Organized football was first played in the territory on November 29, 1894, between the Oklahoma City Terrors and Oklahoma City High School. The high school won 24 to 0.
The University of Southern California first fielded an American football team in 1888, playing its first game on November 14 of that year against the Alliance Athletic Club and winning 16–0. Frank Suffel and Henry H. Goddard were playing coaches for the first team, which was put together by quarterback Arthur Carroll, who also volunteered to make the pants for the team and later became a tailor. USC faced its first collegiate opponent the following year in fall 1889, beating St. Vincent's College 40–0. In 1893, USC joined the Intercollegiate Football Association of Southern California (the forerunner of the SCIAC), which was composed of USC, Occidental College, Throop Polytechnic Institute (Caltech), and Chaffey College. Pomona College was invited to enter, but declined to do so. An invitation was also extended to Los Angeles High School.
In 1891, the first Stanford football team was hastily organized and played a four-game season beginning in January 1892 with no official head coach. Following the season, Stanford captain John Whittemore wrote to Yale coach Walter Camp asking him to recommend a coach for Stanford. To Whittemore's surprise, Camp agreed to coach the team himself, on the condition that he finish the season at Yale first. As a result of Camp's late arrival, Stanford played just three official games, against San Francisco's Olympic Club and rival California. The team also played exhibition games against two Los Angeles area teams that Stanford does not include in official results. Camp returned to the East Coast following the season, then returned to coach Stanford in 1894 and 1895.
On December 25, 1894, Amos Alonzo Stagg's Chicago Maroons agreed to play Camp's Stanford football team in San Francisco in the first postseason intersectional contest, foreshadowing the modern bowl game. Future president Herbert Hoover was Stanford's student financial manager. Chicago won 24 to 4. Stanford won a rematch in Los Angeles on December 29 by 12 to 0.
The Big Game between Stanford and California is the oldest college football rivalry in the West. The first game was played on San Francisco's Haight Street Grounds on March 19, 1892, with Stanford winning 14–10. The term "Big Game" was first used in 1900, when it was played on Thanksgiving Day in San Francisco. During that game, a large group of men and boys, who were observing from the roof of the nearby S.F. and Pacific Glass Works, fell into the fiery interior of the building when the roof collapsed, resulting in 13 dead and 78 injured. On December 4, 1900, the last victim of the disaster (Fred Lilly) died, bringing the death toll to 22; and, to this day, the "Thanksgiving Day Disaster" remains the deadliest accident to kill spectators at a U.S. sporting event.
The University of Oregon began playing American football in 1894 and played its first game on March 24, 1894, defeating Albany College 44–3 under head coach Cal Young. Cal Young left after that first game and J.A. Church took over the coaching position in the fall for the rest of the season. Oregon finished the season with two additional losses and a tie, but went undefeated the following season, winning all four of its games under head coach Percy Benson. In 1899, the Oregon football team left the state for the first time, playing the California Golden Bears in Berkeley, California.
American football at Oregon State University started in 1893, shortly after athletics were first authorized at the college. Athletics had been banned at the school in May 1892, but when the strict school president, Benjamin Arnold, died, President John Bloss reversed the ban. Bloss's son William started the first team, on which he served as both coach and quarterback. The team's first game was an easy 63–0 defeat of Albany College.
In May 1900, Yost was hired as the football coach at Stanford University, and, after traveling home to West Virginia, he arrived in Palo Alto, California, on August 21, 1900. Yost led the 1900 Stanford team to a 7–2–1 record, outscoring opponents 154 to 20. The next year, in 1901, Yost was hired by Charles A. Baird as the head football coach for the Michigan Wolverines football team. On January 1, 1902, Yost's dominating 1901 Michigan Wolverines agreed to play a 3–1–2 team from Stanford University in the inaugural "Tournament East-West football game", now known as the Rose Bowl Game, and won by a score of 49–0 after Stanford captain Ralph Fisher requested to quit with eight minutes remaining.
The 1905 season marked the first meeting between Stanford and USC. Consequently, Stanford is USC's oldest existing rival. The Big Game between Stanford and Cal on November 11, 1905, was the first played at Stanford Field, with Stanford winning 12–5.
In 1906, citing concerns about the violence in American football, universities on the West Coast, led by California and Stanford, replaced the sport with rugby union. At the time, the future of American football was very much in doubt, and these schools believed that rugby union would eventually be adopted nationwide. Other schools that followed suit and made the switch included Nevada, St. Mary's, Santa Clara, and USC (in 1911). However, because of the perception that West Coast football was inferior to the game played on the East Coast anyway, East Coast and Midwest teams shrugged off the loss and continued playing American football. With no nationwide movement, the available pool of rugby teams to play remained small. The schools scheduled games against local club teams and reached out to rugby union powers in Australia, New Zealand, and especially, due to its proximity, Canada. The annual Big Game between Stanford and California continued as rugby, with the winner invited by the British Columbia Rugby Union to a tournament in Vancouver over the Christmas holidays, the winner of that tournament receiving the Cooper Keith Trophy.
During 12 seasons of playing rugby union, Stanford was remarkably successful: the team had three undefeated seasons, three one-loss seasons, and an overall record of 94 wins, 20 losses, and 3 ties for a winning percentage of .816. However, after a few years, the school began to feel the isolation of its newly adopted sport, which was not spreading as many had hoped. Students and alumni began to clamor for a return to American football to allow wider intercollegiate competition. The pressure at rival California was stronger (especially as the school had not been as successful in the Big Game as it had hoped), and in 1915 California returned to American football. As reasons for the change, the school cited the rule changes that had made American football safer, the overwhelming desire of students and supporters to play American football, interest in playing other East Coast and Midwest schools, and a patriotic desire to play an "American" game. California's return to American football increased the pressure on Stanford to also change back in order to maintain the rivalry. Stanford played its 1915, 1916, and 1917 "Big Games" as rugby union against Santa Clara, and California's football "Big Game" in those years was against Washington, but both schools desired to restore the old traditions. The onset of American involvement in World War I gave Stanford an out: in 1918, the Stanford campus was designated as the Students' Army Training Corps headquarters for all of California, Nevada, and Utah, and the commanding officer Sam M. Parker decreed that American football was the appropriate athletic activity to train soldiers, so rugby union was dropped.
The University of Colorado began playing American football in 1890. Colorado found much success in its early years, winning eight Colorado Football Association Championships (1894–97, 1901–08).
The following was taken from the Silver & Gold newspaper of December 16, 1898. It was a recollection of the birth of Colorado football written by one of CU's original gridders, John C. Nixon, also the school's second captain. It appears here in its original form:
At the beginning of the first semester in the fall of '90 the boys rooming at the dormitory on the campus of the U. of C. being afflicted with a super-abundance of penned up energy, or perhaps having recently drifted from under the parental wing and delighting in their newly found freedom, decided among other wild schemes, to form an athletic association. Messrs Carney, Whittaker, Layton and others, who at that time constituted a majority of the male population of the University, called a meeting of the campus boys in the old medical building. Nixon was elected president and Holden secretary of the association.
It was voted that the officers constitute a committee to provide uniform suits in which to play what was called "association football". Suits of flannel were ultimately procured and paid for assessments on the members of the association and generous contributions from members of the faculty. ...
The Athletic Association should now invigorate its base-ball and place it at par with its football team; and it certainly has the material with which to do it. The U of C should henceforth lead the state and possibly the west in athletic sports. ...
The style of football playing has altered considerably; by the old rules, all men in front of the runner with the ball, were offside, consequently we could not send backs through and break the line ahead of the ball as is done at present. The notorious V was then in vogue, which gave a heavy team too much advantage. The mass plays being now barred, skill on the football field is more in demand than mere weight and strength.
In 1909, the Rocky Mountain Athletic Conference was founded, featuring four members: Colorado, Colorado College, Colorado School of Mines, and Colorado Agricultural College. The University of Denver and the University of Utah joined the RMAC in 1910. For its first thirty years, the RMAC was considered a major conference equivalent to today's Division I, before 7 larger members left and formed the Mountain States Conference (also called the Skyline Conference).
College football increased in popularity through the remainder of the 19th and early 20th century. It also became increasingly violent. Between 1890 and 1905, 330 college athletes died as a direct result of injuries sustained on the football field. These deaths could be attributed to the mass formations and gang tackling that characterized the sport in its early years.
No sport is wholesome in which ungenerous or mean acts which easily escape detection contribute to victory.
Charles William Eliot, President of Harvard University (1869–1909), opposing football in 1905.
The 1894 Harvard–Yale game, known as the "Hampden Park Blood Bath", resulted in crippling injuries for four players; the contest was suspended until 1897. The annual Army–Navy game was suspended from 1894 to 1898 for similar reasons. One of the major problems was the popularity of mass-formations like the flying wedge, in which a large number of offensive players charged as a unit against a similarly arranged defense. The resultant collisions often led to serious injuries and sometimes even death. Georgia fullback Richard Von Albade Gammon notably died on the field from concussions received against Virginia in 1897, causing Georgia, Georgia Tech, and Mercer to suspend their football programs.
The situation came to a head in 1905 when there were 19 fatalities nationwide. President Theodore Roosevelt reportedly threatened to shut down the game if drastic changes were not made. However, the threat by Roosevelt to eliminate football is disputed by sports historians. What is certain is that on October 9, 1905, Roosevelt held a meeting of football representatives from Harvard, Yale, and Princeton. Though he lectured on eliminating and reducing injuries, he never threatened to ban football. He also lacked the authority to abolish football and was, in fact, a fan of the sport who wanted to preserve it. The President's sons were also playing football at the college and secondary levels at the time.
Meanwhile, John H. Outland held an experimental game in Wichita, Kansas that reduced the number of scrimmage plays to earn a first down from four to three in an attempt to reduce injuries. The Los Angeles Times reported an increase in punts and considered the game much safer than regular play, but said that the new rule was not "conducive to the sport". In 1905, President Roosevelt organized a meeting among thirteen school leaders at the White House to find solutions to make the sport safer for the athletes. Because the college officials could not agree upon a change in rules, it was decided over the course of several subsequent meetings that an external governing body should be responsible. Finally, on December 28, 1905, 62 schools met in New York City to discuss rule changes to make the game safer. As a result of this meeting, the Intercollegiate Athletic Association of the United States (IAAUS) was formed in 1906. The IAAUS was the original rule-making body of college football, but would go on to sponsor championships in other sports. The IAAUS would get its current name of National Collegiate Athletic Association (NCAA) in 1910, and still sets rules governing the sport.
The rules committee considered widening the playing field to "open up" the game, but Harvard Stadium (the first large permanent football stadium) had recently been built at great expense; it would be rendered useless by a wider field. The rules committee legalized the forward pass instead. Though it was underutilized for years, this proved to be one of the most important rule changes in the establishment of the modern game. Another rule change banned "mass momentum" plays (many of which, like the infamous "flying wedge", were sometimes literally deadly).
As a result of the 1905–1906 reforms, mass formation plays became illegal and forward passes legal. Bradbury Robinson, playing for visionary coach Eddie Cochems at Saint Louis University, threw the first legal pass in a September 5, 1906, game against Carroll College at Waukesha. Other important changes, formally adopted in 1910, were the requirements that at least seven offensive players be on the line of scrimmage at the time of the snap, that there be no pushing or pulling, and that interlocking interference (arms linked or hands on belts and uniforms) was not allowed. These changes greatly reduced the potential for collision injuries. Several coaches emerged who took advantage of these sweeping changes. Amos Alonzo Stagg introduced such innovations as the huddle, the tackling dummy, and the pre-snap shift. Other coaches, such as Pop Warner and Knute Rockne, introduced new strategies that still remain part of the game.
Besides these coaching innovations, several rules changes during the first third of the 20th century had a profound impact on the game, mostly in opening up the passing game. In 1914, the first roughing-the-passer penalty was implemented. In 1918, the rules on eligible receivers were loosened to allow eligible players to catch the ball anywhere on the field—previously strict rules were in place allowing passes to only certain areas of the field. Scoring rules also changed during this time: field goals were lowered to three points in 1909 and touchdowns raised to six points in 1912.
Star players that emerged in the early 20th century include Jim Thorpe, Red Grange, and Bronko Nagurski; these three made the transition to the fledgling NFL and helped turn it into a successful league. Sportswriter Grantland Rice helped popularize the sport with his poetic descriptions of games and colorful nicknames for the game's biggest players, including Notre Dame's "Four Horsemen" backfield and Fordham University's linemen, known as the "Seven Blocks of Granite".
In 1907, at Champaign, Illinois, Chicago and Illinois played in the first game to have a halftime show featuring a marching band. Chicago won 42–6. On November 25, 1911, Kansas played at Missouri in the first homecoming football game. The game was "broadcast" play-by-play over telegraph to at least 1,000 fans in Lawrence, Kansas. It ended in a 3–3 tie. The game between West Virginia and Pittsburgh on October 8, 1921, saw the first live radio broadcast of a college football game, when Harold W. Arlin announced that year's Backyard Brawl, played at Forbes Field, on KDKA. Pitt won 21–13. On October 28, 1922, Princeton and Chicago played the first game to be nationally broadcast on radio. Princeton won 21–18 in a hotly contested game which led to Princeton being dubbed the "Team of Destiny".
One publication claims "The first scouting done in the South was in 1905, when Dan McGugin and Captain Innis Brown, of Vanderbilt went to Atlanta to see Sewanee play Georgia Tech." Fuzzy Woodruff claims Davidson was the first in the south to throw a legal forward pass in 1906. The following season saw Vanderbilt execute a double pass play to set up the touchdown that beat Sewanee in a meeting of the unbeaten for the SIAA championship. Grantland Rice cited this event as the greatest thrill he ever witnessed in his years of watching sports. Vanderbilt coach Dan McGugin in Spalding's Football Guide's summation of the season in the SIAA wrote "The standing. First, Vanderbilt; second, Sewanee, a might good second;" and that Aubrey Lanier "came near winning the Vanderbilt game by his brilliant dashes after receiving punts." Bob Blake threw the final pass to center Stein Stone, catching it near the goal amongst defenders. Honus Craig then ran in the winning touchdown.
Utilizing the "jump shift" offense, John Heisman's Georgia Tech Golden Tornado won 222 to 0 over Cumberland on October 7, 1916, at Grant Field in the most lopsided victory in college football history. Tech went on a 33-game winning streak during this period. The 1917 team was the first national champion from the South, led by a powerful backfield. It also had the first two players from the Deep South selected first-team All-American in Walker Carpenter and Everett Strupper. Pop Warner's Pittsburgh Panthers were also undefeated, but declined a challenge by Heisman to a game. When Heisman left Tech after 1919, his shift was still employed by protégé William Alexander.
In 1906, Vanderbilt defeated Carlisle 4 to 0, the result of a Bob Blake field goal. In 1907 Vanderbilt fought Navy to a 6 to 6 tie. In 1910 Vanderbilt held defending national champion Yale to a scoreless tie.
Helping Georgia Tech's claim to a title in 1917, the Auburn Tigers held undefeated, Chic Harley-led Big Ten champion Ohio State to a scoreless tie the week before Georgia Tech beat the Tigers 68 to 7. The next season, with many players gone due to World War I, a game was finally scheduled at Forbes Field with Pittsburgh. The Panthers, led by freshman Tom Davies, defeated Georgia Tech 32 to 0. Tech center Bum Day was the first player on a Southern team ever selected first-team All-American by Walter Camp.
1917 saw the rise of another Southern team in Centre of Danville, Kentucky. In 1921, Bo McMillin-led Centre upset defending national champion Harvard 6 to 0 in what is widely considered one of the greatest upsets in college football history. The next year Vanderbilt fought Michigan to a scoreless tie at the inaugural game at Dudley Field (now Vanderbilt Stadium), the first stadium in the South made exclusively for college football. Michigan coach Fielding Yost and Vanderbilt coach Dan McGugin were brothers-in-law, and the latter was the protégé of the former. The game featured the season's two best defenses and included a goal line stand by Vanderbilt to preserve the tie. Its result was "a great surprise to the sporting world". Commodore fans celebrated by throwing some 3,000 seat cushions onto the field. The game features prominently in Vanderbilt's history. That same year, Alabama upset Penn 9 to 7.
Vanderbilt's line coach then was Wallace Wade, who coached Alabama to the South's first Rose Bowl victory in 1925. This game is commonly referred to as "the game that changed the south". Wade followed up the next season with an undefeated record and Rose Bowl tie. Georgia's 1927 "dream and wonder team" defeated Yale for the first time. Georgia Tech, led by Heisman protégé William Alexander, gave the dream and wonder team its only loss, and the next year were national and Rose Bowl champions. The Rose Bowl included Roy Riegels' wrong-way run. On October 12, 1929, Yale lost to Georgia in Sanford Stadium in its first trip to the south. Wade's Alabama again won a national championship and Rose Bowl in 1930.
Glenn "Pop" Warner coached at several schools throughout his career, including the University of Georgia, Cornell University, University of Pittsburgh, Stanford University, Iowa State University, and Temple University. One of his most famous stints was at the Carlisle Indian Industrial School, where he coached Jim Thorpe, who went on to become the first president of the National Football League, an Olympic Gold Medalist, and is widely considered one of the best overall athletes in history. Warner wrote one of the first important books of football strategy, Football for Coaches and Players, published in 1927. Though the shift was invented by Stagg, Warner's single wing and double wing formations greatly improved upon it; for almost 40 years, these were among the most important formations in football. As part of his single and double wing formations, Warner was one of the first coaches to effectively utilize the forward pass. Among his other innovations are modern blocking schemes, the three-point stance, and the reverse play. The youth football league, Pop Warner Little Scholars, was named in his honor.
Knute Rockne rose to prominence in 1913 as an end for the University of Notre Dame, then a largely unknown Midwestern Catholic school. When Army scheduled Notre Dame as a warm-up game, they thought little of the small school. Rockne and quarterback Gus Dorais made innovative use of the forward pass, still at that point a relatively unused weapon, to defeat Army 35–13 and helped establish the school as a national power. Rockne returned to coach the team in 1918, and devised the powerful Notre Dame Box offense, based on Warner's single wing. He is credited with being the first major coach to emphasize offense over defense. Rockne is also credited with popularizing and perfecting the forward pass, a seldom used play at the time. The 1924 team featured the Four Horsemen backfield. In 1927, his complex shifts led directly to a rule change whereby all offensive players had to stop for a full second before the ball could be snapped. Rather than simply a regional team, Rockne's "Fighting Irish" became famous for barnstorming and played any team at any location. It was during Rockne's tenure that the annual Notre Dame-University of Southern California rivalry began. He led his team to an impressive 105–12–5 record before his premature death in a plane crash in 1931. He was so famous at that point that his funeral was broadcast nationally on radio.
In the early 1930s, the college game continued to grow, particularly in the South, bolstered by fierce rivalries such as the "South's Oldest Rivalry", between Virginia and North Carolina, and the "Deep South's Oldest Rivalry", between Georgia and Auburn. Although before the mid-1920s most national powers came from the Northeast or the Midwest, the trend changed when several teams from the South and the West Coast achieved national success. Wallace William Wade's 1925 Alabama team won the 1926 Rose Bowl after receiving the school's first national title, and William Alexander's 1928 Georgia Tech team defeated California in the 1929 Rose Bowl. College football quickly became the most popular spectator sport in the South.
Several major modern college football conferences rose to prominence during this time period. The Southwest Athletic Conference had been founded in 1915. Consisting mostly of schools from Texas, the conference saw back-to-back national champions with Texas Christian University (TCU) in 1938 and Texas A&M in 1939. The Pacific Coast Conference (PCC), a precursor to the Pac-12 Conference (Pac-12), had its own back-to-back champion in the University of Southern California which was awarded the title in 1931 and 1932. The Southeastern Conference (SEC) formed in 1932 and consisted mostly of schools in the Deep South. As in previous decades, the Big Ten continued to dominate in the 1930s and 1940s, with Minnesota winning 5 titles between 1934 and 1941, and Michigan (1933, 1947, and 1948) and Ohio State (1942) also winning titles.
As it grew beyond its regional affiliations in the 1930s, college football garnered increased national attention. Four new bowl games were created: the Orange Bowl, Sugar Bowl, and Sun Bowl in 1935, and the Cotton Bowl in 1937. In lieu of an actual national championship, these bowl games, along with the earlier Rose Bowl, provided a way to match up teams from distant regions of the country that did not otherwise play. In 1936, the Associated Press began its weekly poll of prominent sports writers, ranking all of the nation's college football teams. Since there was no national championship game, the final version of the AP poll was used to determine who was crowned the National Champion of college football.
The 1930s saw growth in the passing game. Though some coaches, such as General Robert Neyland at Tennessee, continued to eschew its use, several rules changes to the game had a profound effect on teams' ability to throw the ball. In 1934, the rules committee removed two major penalties—a loss of five yards for a second incomplete pass in any series of downs and a loss of possession for an incomplete pass in the end zone—and shrank the circumference of the ball, making it easier to grip and throw. Players who became famous for taking advantage of the easier passing game included Alabama end Don Hutson and TCU passer "Slingin'" Sammy Baugh.
In 1935, New York City's Downtown Athletic Club awarded the first Heisman Trophy to University of Chicago halfback Jay Berwanger, who was also the first ever NFL Draft pick in 1936. The trophy was designed by sculptor Frank Eliscu and modeled after New York University player Ed Smith. The trophy recognizes the nation's "most outstanding" college football player and has become one of the most coveted awards in all of American sports.
During World War II, college football players enlisted in the armed forces, some playing in Europe during the war. As most of these players had eligibility left on their college careers, some of them returned to college at West Point, bringing Army back-to-back national titles in 1944 and 1945 under coach Red Blaik. Doc Blanchard (known as "Mr. Inside") and Glenn Davis (known as "Mr. Outside") both won the Heisman Trophy, in 1945 and 1946 respectively. On the coaching staff of those 1944–1946 Army teams was future Pro Football Hall of Fame coach Vince Lombardi.
The 1950s saw the rise of yet more dynasties and power programs. Oklahoma, under coach Bud Wilkinson, won three national titles (1950, 1955, 1956) and all ten Big Eight Conference championships in the decade while building a record 47-game winning streak. Woody Hayes led Ohio State to two national titles, in 1954 and 1957, and won three Big Ten titles. The Michigan State Spartans were known as the "football factory" during the 1950s, where coaches Clarence Munn and Duffy Daugherty led the Spartans to two national titles and two Big Ten titles after joining the Big Ten athletically in 1953. Wilkinson and Hayes, along with Robert Neyland of Tennessee, oversaw a revival of the running game in the 1950s. Passing numbers dropped from an average of 18.9 attempts in 1951 to 13.6 attempts in 1955, while teams averaged just shy of 50 running plays per game. Nine out of ten Heisman Trophy winners in the 1950s were runners. Notre Dame, one of the biggest passing teams of the decade, saw a substantial decline in success; the 1950s were the only decade between 1920 and 1990 when the team did not win at least a share of the national title. Paul Hornung, Notre Dame quarterback, did, however, win the Heisman in 1956, becoming the only player from a losing team ever to do so.
The 1956 Sugar Bowl also gained international attention when Georgia's pro-segregationist Gov. Griffin publicly threatened Georgia Tech and its president, Blake Van Leer, over allowing the first African American player to play in a collegiate bowl game in the South.
Following the enormous success of the 1958 NFL Championship Game, college football no longer enjoyed the same popularity as the NFL, at least on a national level. While both games benefited from the advent of television, since the late 1950s, the NFL has become a nationally popular sport while college football has maintained strong regional ties.
As professional football became a national television phenomenon, college football did as well. In the 1950s, Notre Dame, which had a large national following, formed its own network to broadcast its games, but by and large the sport still retained a mostly regional following. In 1952, the NCAA claimed all television broadcasting rights for the games of its member institutions, and it alone negotiated television rights. This situation continued until 1984, when several schools brought a suit under the Sherman Antitrust Act; the Supreme Court ruled against the NCAA and schools are now free to negotiate their own television deals. ABC Sports began broadcasting a national Game of the Week in 1966, bringing key matchups and rivalries to a national audience for the first time.
New formations and play sets continued to be developed. Emory Bellard, an assistant coach under Darrell Royal at the University of Texas, developed a three-back option style offense known as the wishbone. The wishbone is a run-heavy offense that depends on the quarterback making last-second decisions on when and to whom to hand or pitch the ball. Royal went on to teach the offense to other coaches, including Bear Bryant at Alabama, Chuck Fairbanks at Oklahoma, and Pepper Rodgers at UCLA, all of whom adapted and developed it to their own tastes. The strategic opposite of the wishbone is the spread offense, developed by professional and college coaches throughout the 1960s and 1970s. Though some schools play a run-based version of the spread, its most common use is as a passing offense designed to "spread" the field both horizontally and vertically. Some teams have managed to adapt with the times to keep winning consistently. In the rankings of the most victorious programs, Michigan, Ohio State, and Alabama ranked first, second, and third in total wins.
In 1940, for the highest level of college football, there were only five bowl games (Rose, Orange, Sugar, Sun, and Cotton). By 1950, three more had joined that number and in 1970, there were still only eight major college bowl games. The number grew to eleven in 1976. At the birth of cable television and cable sports networks like ESPN, there were fifteen bowls in 1980. With more national venues and increased available revenue, the bowls saw an explosive growth throughout the 1980s and 1990s. In the thirty years from 1950 to 1980, seven bowl games were added to the schedule. From 1980 to 2008, an additional 20 bowl games were added to the schedule. Some have criticized this growth, claiming that the increased number of games has diluted the significance of playing in a bowl game. Yet others have countered that the increased number of games has increased exposure and revenue for a greater number of schools, and see it as a positive development. Teams participating in bowl games also get to practice up to four hours per day or 20 hours per week until their bowl game concludes. There is no limit on the number of practices during the bowl season, so teams that play later in the season (usually ones with more wins) get more opportunity to practice than ones that play earlier. This bowl practice period can be compared to the spring practice schedule when teams can have 15 on-field practice sessions. Many teams that play late in the bowl season use the first few practices for evaluation and development of younger players while resting the starters.
Currently, the NCAA Division I football teams are divided into two subdivisions: the "football bowl subdivision" (FBS) and the "football championship subdivision" (FCS). As indicated by the name, the FBS teams are eligible to play in post-season bowls. The FCS teams, as well as Division II, Division III, and National Junior College teams, play in sanctioned tournaments to determine their annual champions. There is not now, and never has been, an NCAA-sanctioned tournament to determine the champion of the top-level football teams.
With the growth of bowl games, it became difficult to determine a national champion in a fair and equitable manner. As conferences became contractually bound to certain bowl games (a situation known as a tie-in), match-ups that guaranteed a consensus national champion became increasingly rare.
In 1992, seven conferences and independent Notre Dame formed the Bowl Coalition, which attempted to arrange an annual No. 1 versus No. 2 matchup based on the final AP poll standings. The Coalition lasted for three years; however, several scheduling issues prevented much success, and tie-ins still took precedence in several cases. For example, the Big Eight and SEC champions could never meet, since they were contractually bound to different bowl games. The Coalition also excluded the Rose Bowl, arguably the most prestigious game in the nation, and two major conferences—the Pac-10 and Big Ten—meaning that it had limited success.
In 1995, the Coalition was replaced by the Bowl Alliance, which reduced the number of bowl games hosting a national championship game to three—the Fiesta, Sugar, and Orange Bowls—and the participating conferences to five—the ACC, SEC, Southwest, Big Eight, and Big East. It was agreed that the No. 1 and No. 2 ranked teams would give up their prior bowl tie-ins and meet in the national championship game, which rotated between the three participating bowls. The system still did not include the Big Ten, Pac-10, or the Rose Bowl, and thus still lacked the legitimacy of a true national championship. However, one positive side effect was that if three teams were vying for a national title at the end of the season, but one of them was a Pac-10/Big Ten team bound to the Rose Bowl, there was no difficulty in deciding which teams to place in the Bowl Alliance "national championship" bowl; if the Pac-10/Big Ten team won the Rose Bowl and finished with the same record as the winner of the other bowl game, it could claim a share of the national title. This happened in the final year of the Bowl Alliance, with Michigan winning the 1998 Rose Bowl and Nebraska winning the 1998 Orange Bowl. Without the Pac-10/Big Ten team bound to a bowl game, it would have been difficult to decide which two teams should play for the national title.
In 1998, a new system was put into place called the Bowl Championship Series. For the first time, it included all major conferences (ACC, Big East, Big 12, Big Ten, Pac-10, and SEC) and four major bowl games (Rose, Orange, Sugar, and Fiesta). The champions of these six conferences, along with two "at-large" selections, were invited to play in the four bowl games. Each year, one of the four bowl games served as a national championship game. Also, a complex system of human polls, computer rankings, and strength of schedule calculations was instituted to rank schools. Based on this ranking system, the No. 1 and No. 2 teams met each year in the national championship game. Traditional tie-ins were maintained for schools and bowls not part of the national championship. For example, in years when not a part of the national championship, the Rose Bowl still hosted the Big Ten and Pac-10 champions.
The system continued to change, as the formula for ranking teams was tweaked from year to year. At-large teams could be chosen from any of the Division I-A conferences, though only one selection—Utah in 2005—came from a BCS non-AQ conference. Starting with the 2006 season, a fifth game—simply called the BCS National Championship Game—was added to the schedule, to be played at the site of one of the four BCS bowl games on a rotating basis, one week after the regular bowl game. This opened up the BCS to two additional at-large teams. Also, rules were changed to add the champions of five additional conferences (Conference USA [C-USA], the Mid-American Conference [MAC], the Mountain West Conference [MW], the Sun Belt Conference and the Western Athletic Conference [WAC]), provided that said champion ranked in the top twelve in the final BCS rankings, or was within the top 16 of the BCS rankings and ranked higher than the champion of at least one of the BCS Automatic Qualifying (AQ) conferences. Several times since this rule change was implemented, schools from non-AQ conferences have played in BCS bowl games. In 2009, Boise State played TCU in the Fiesta Bowl, the first time two schools from non-AQ conferences played each other in a BCS bowl game. The last team from the non-AQ ranks to reach a BCS bowl game in the BCS era was Northern Illinois in 2012, which played in (and lost) the 2013 Orange Bowl.
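The non-AQ access rule described above amounts to a two-branch eligibility test. A minimal sketch, assuming final BCS ranks are passed as 1-based integers with None for unranked teams (the function name and data layout are illustrative, not from any official source):

```python
from typing import Optional

def non_aq_champion_qualifies(rank: Optional[int],
                              aq_champion_ranks: list[Optional[int]]) -> bool:
    """Sketch of the BCS access rule for non-AQ conference champions:
    qualify by finishing in the top 12 of the final BCS rankings, or in
    the top 16 while ranked above at least one AQ conference champion.
    Ranks are 1-based (1 = best); None means unranked."""
    if rank is None:
        return False
    if rank <= 12:
        return True
    if rank <= 16:
        # An unranked AQ champion counts as ranked below any ranked team.
        return any(r is None or rank < r for r in aq_champion_ranks)
    return False

# A non-AQ champion ranked 14th qualifies if some AQ champion sits at 18th:
print(non_aq_champion_qualifies(14, [3, 5, 8, 11, 18, None]))  # True
```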
The longtime resistance to a playoff system at the FBS level finally ended with the creation of the College Football Playoff (CFP) beginning with the 2014 season. The CFP is a Plus-One system, a concept that became popular as a BCS alternative following controversies in 2003 and 2004. The CFP is a four-team tournament whose participants are chosen and seeded by a 13-member selection committee. The semifinals are hosted by two of a group of traditional bowl games known as the New Year's Six, with semifinal hosting rotating annually among three pairs of games in the following order: Rose/Sugar, Orange/Cotton, and Fiesta/Peach. The two semifinal winners then advance to the College Football Playoff National Championship, whose host is determined by open bidding several years in advance.
The 10 FBS conferences are formally and popularly divided into two groups: the "Power Five" (the ACC, Big Ten, Big 12, Pac-12, and SEC), whose members have the largest budgets and most prominent programs, and the "Group of Five" (the American Athletic Conference, Conference USA, Mid-American Conference, Mountain West Conference, and Sun Belt Conference).
Although rules for the high school, college, and NFL games are generally consistent, there are several minor differences. Before 2023, a single NCAA Football Rules Committee determined the playing rules for Division I (both Bowl and Championship Subdivisions), II, and III games (the National Association of Intercollegiate Athletics (NAIA) is a separate organization, but uses the NCAA rules). As part of an NCAA initiative to give each division more autonomy over its governance, separate rules committees have been established for each NCAA division.
College teams mostly play other similarly sized schools through the NCAA's divisional system. Division I generally consists of the major collegiate athletic powers with larger budgets, more elaborate facilities, and (with the exception of a few conferences such as the Pioneer Football League) more athletic scholarships. Division II primarily consists of smaller public and private institutions that offer fewer scholarships than those in Division I. Division III institutions also field teams, but do not offer any scholarships.
Football teams in Division I are further divided into the Bowl Subdivision (consisting of the largest programs) and the Championship Subdivision. The Bowl Subdivision has historically not used an organized tournament to determine its champion, and instead teams compete in post-season bowl games. That changed with the debut of the four-team College Football Playoff at the end of the 2014 season.
Teams in each of these four divisions are further divided into various regional conferences.
Several organizations operate college football programs outside the jurisdiction of the NCAA, including the National Association of Intercollegiate Athletics (NAIA), junior college associations, and the bodies that govern club football and sprint football.
A college that fields a team in the NCAA is not restricted from also fielding teams in club or sprint football, and several colleges field two teams: a varsity (NCAA) squad and a club or sprint squad (as of 2023, no school fields both club and sprint teams at the same time).
Starting with the 2014 season, four Division I FBS teams have been selected at the end of the regular season to compete in a playoff for the FBS national championship. The inaugural champion was Ohio State University. The College Football Playoff replaced the Bowl Championship Series, which had been used as a selection method to determine the national championship game participants since the 1998 season. The Georgia Bulldogs won the most recent playoff, 65–7, over the TCU Horned Frogs in the 2023 College Football Playoff.
At the Division I FCS level, the teams participate in a 24-team playoff (most recently expanded from 20 teams in 2013) to determine the national championship. Under the current playoff structure, the top eight teams are all seeded and receive a bye week in the first round. The highest seed receives automatic home field advantage. Starting in 2013, non-seeded teams can only host a playoff game if both teams involved are unseeded; in such a matchup, the schools must bid for the right to host the game. Selection for the playoffs is determined by a selection committee, although a team usually must have at least an 8–4 record to even be considered. Losses to an FBS team count against playoff eligibility, while wins against a Division II opponent do not count towards playoff consideration. Thus, only Division I wins (whether FBS, FCS, or FCS non-scholarship) are considered for playoff selection. The FCS national championship game is held in Frisco, Texas.
Division II and Division III of the NCAA also participate in their own respective playoffs, crowning national champions at the end of the season. The National Association of Intercollegiate Athletics also holds a playoff.
Unlike other college football divisions and most other sports—collegiate or professional—the Football Bowl Subdivision, formerly known as Division I-A college football, has historically not employed a playoff system to determine a champion. Instead, it has a series of postseason "bowl games". The annual National Champion in the Football Bowl Subdivision is then instead traditionally determined by a vote of sports writers and other non-players.
This system has been challenged often, beginning with an NCAA committee proposal in 1979 to have a four-team playoff following the bowl games. However, little headway was made in instituting a playoff tournament until 2014, given the entrenched vested economic interests in the various bowls. Although the NCAA publishes lists of claimed FBS-level national champions in its official publications, it has never recognized an official FBS national championship; this policy continues even after the establishment of the College Football Playoff (which is not directly run by the NCAA) in 2014. As a result, the official Division I National Champion is the winner of the Football Championship Subdivision, as it is the highest level of football with an NCAA-administered championship tournament. (This also means that FBS student-athletes are the only NCAA athletes who are ineligible for the Elite 90 Award, an academic award presented to the upper class player with the highest grade-point average among the teams that advance to the championship final site.)
The first bowl game was the 1902 Rose Bowl, played between Michigan and Stanford; Michigan won 49–0. The game was called with eight minutes remaining after Stanford requested an early end and Michigan agreed. That game was so lopsided that the contest was not played annually until 1916, when the Tournament of Roses decided to reattempt the postseason game. The term "bowl" originates from the shape of the Rose Bowl stadium in Pasadena, California, which was built in 1923 and resembled the Yale Bowl, built in 1915. This is where the name came into use, as it became known as the Rose Bowl Game. Other games came along and used the term "bowl", whether the stadium was shaped like a bowl or not.
At the Division I FBS level, teams must earn the right to be bowl eligible by winning at least 6 games during the season (teams that play 13 games in a season, which is allowed for Hawaii and any of its home opponents, must win 7 games). They are then invited to a bowl game based on their conference ranking and the tie-ins that the conference has to each bowl game. For the 2009 season, there were 34 bowl games, so 68 of the 120 Division I FBS teams were invited to play in a bowl. These games are played from mid-December to early January, and the later bowl games are typically considered more prestigious.
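The eligibility rule just described reduces to a simple threshold check; a minimal sketch under those stated rules (the function is hypothetical, not NCAA code):

```python
def bowl_eligible(wins: int, regular_season_games: int) -> bool:
    """Sketch of the FBS bowl-eligibility rule: at least 6 wins, or 7
    for teams playing a 13-game regular season (permitted for Hawaii
    and its home opponents)."""
    return wins >= (7 if regular_season_games >= 13 else 6)

print(bowl_eligible(6, 12))  # True
print(bowl_eligible(6, 13))  # False; a 13-game schedule requires 7 wins
```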
After the Bowl Championship Series, additional all-star bowl games round out the post-season schedule through the beginning of February.
Partly as a compromise between both bowl game and playoff supporters, the NCAA created the Bowl Championship Series (BCS) in 1998 in order to create a definitive national championship game for college football. The series included the four most prominent bowl games (Rose Bowl, Orange Bowl, Sugar Bowl, Fiesta Bowl), while the national championship game rotated each year between one of these venues. The BCS system was slightly adjusted in 2006, as the NCAA added a fifth game to the series, called the National Championship Game. This allowed the four other BCS bowls to use their normal selection process to select the teams in their games while the top two teams in the BCS rankings would play in the new National Championship Game.
The BCS selection committee used a complicated, and often controversial, computer system to rank all Division I-FBS teams and the top two teams at the end of the season played for the national championship. This computer system, which factored in newspaper polls, online polls, coaches' polls, strength of schedule, and various other factors of a team's season, led to much dispute over whether the two best teams in the country were being selected to play in the National Championship Game.
The BCS ended after the 2013 season and, since the 2014 season, the FBS national champion has been determined by a four-team tournament known as the College Football Playoff (CFP). A selection committee of college football experts decides the participating teams. Six major bowl games known as the New Year's Six (NY6)—the Rose, Sugar, Cotton, Orange, Peach, and Fiesta Bowls—rotate on a three-year cycle as semifinal games, with the winners advancing to the College Football Playoff National Championship. This arrangement was contractually locked in until the 2026 season, but an agreement was reached on CFP expansion to 12 teams effective with the 2024 season.
In the new CFP format, no conferences will receive automatic bids. Playoff berths will be awarded to the top six conference champions in the CFP rankings, plus the top six remaining teams (which may include other conference champions). The top four conference champions receive first-round byes. All first-round games will be played at the home field of the higher seed. The winners of these games advance to meet the top four seeds in the quarterfinals. The NY6 games will host the quarterfinals and semifinals, rotating so that each bowl game will host two quarterfinals and one semifinal in a three-year cycle. The CFP National Championship will continue to be held at a site determined by open bidding several years in advance.
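As a rough illustration of the selection and seeding rules just described, the following sketch builds a 12-team field from a best-first ranking list. The bracket pairing (5 v 12, 6 v 11, and so on) follows standard seeding convention, and all names and data structures are hypothetical:

```python
def cfp_twelve_team_field(rankings: list[str], champions: set[str]) -> dict:
    """Sketch of the 12-team CFP selection described above: berths go to
    the six highest-ranked conference champions plus the six highest-ranked
    remaining teams; the top four champions receive first-round byes."""
    champ_berths = [t for t in rankings if t in champions][:6]
    at_large = [t for t in rankings if t not in champ_berths][:6]
    byes = champ_berths[:4]  # seeds 1-4: the four highest-ranked champions
    seeds_5_12 = sorted((set(champ_berths) | set(at_large)) - set(byes),
                        key=rankings.index)
    # First-round games are hosted by the higher seed: 5 v 12, 6 v 11, ...
    first_round = [(seeds_5_12[i], seeds_5_12[7 - i]) for i in range(4)]
    return {"byes": byes, "first_round": first_round}

# Hypothetical best-first ranking with seven conference champions:
teams = [f"Team{i}" for i in range(1, 26)]
champs = {"Team1", "Team3", "Team9", "Team14", "Team18", "Team22", "Team24"}
result = cfp_twelve_team_field(teams, champs)
print(result["byes"])         # ['Team1', 'Team3', 'Team9', 'Team14']
print(result["first_round"])  # seed 5 hosts seed 12, and so on
```

Note that the at-large pool is simply the six best-ranked teams not already holding a champion's berth, so lower-ranked conference champions can still enter the field as at-large selections, matching the rule stated above.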
College football is a controversial institution within American higher education, where the amount of money involved—what people will pay for the entertainment provided—is a corrupting factor within universities that they are usually ill-equipped to deal with. According to William E. Kirwan, chancellor of the University of Maryland System and co-director of the Knight Commission on Intercollegiate Athletics, "We've reached a point where big-time intercollegiate athletics is undermining the integrity of our institutions, diverting presidents and institutions from their main purpose." Football coaches often make more than the presidents of the universities which employ them. Athletes are alleged to receive preferential treatment both in academics and when they run afoul of the law. Although in theory football is an extra-curricular activity engaged in as a sideline by students, it is widely believed to turn a substantial profit, from which the athletes receive no direct benefit. There has been serious discussion about making student-athletes university employees to allow them to be paid. In reality, the majority of major collegiate football programs operated at a financial loss in 2014.
There had been discussions on changing rules that prohibited compensation for the use of a player's name, image, and likeness (NIL), but change did not start to come until the mid-2010s. This reform first took place in the NAIA, which initially allowed all student-athletes at its member schools to receive NIL compensation in 2014, and beginning in 2020 specifically allowed these individuals to reference their athletic participation in their endorsement deals. The NCAA passed its own NIL reform, very similar to the NAIA's most recent reform, in July 2021, after its hand was forced by multiple states that had passed legislation allowing NIL compensation, most notably California.
On June 3, 2021, "The NCAA's Board of Directors adopts a temporary rule change that opens the door for NIL activity, instructing schools to set their own policy for what should be allowed with minimal guidelines" (Murphy 2021). On July 1, 2021, the new rules took effect and student athletes could start signing endorsements using their name, image and likeness. "The NCAA has asked Congress for help in creating a federal NIL law. While several federal options have been proposed, it's becoming increasingly likely that state laws will start to go into effect before a nationwide change is made. There are 28 states with NIL laws already in place and multiple others that are actively pursuing legislation" (Murphy 2021).
Canadian football, which parallels American football, is played by university teams in Canada under the auspices of U Sports. (Unlike in the United States, no junior colleges play football in Canada, and the sanctioning body for junior college athletics in Canada, CCAA, does not sanction the sport.) However, amateur football outside of colleges is played in Canada, such as in the Canadian Junior Football League. Organized competition in American football also exists at the collegiate level in Mexico (ONEFA), the UK (British Universities American Football League), Japan (Japan American Football Association, Koshien Bowl), and South Korea (Korea American Football Association).
According to a 2017 study of the brains of deceased gridiron football players, 99% of tested brains of NFL players, 88% of CFL players, 64% of semi-professional players, 91% of college football players, and 21% of high school football players had various stages of CTE. The study noted that it has limitations due to "selection bias", in that the brains donated came from families who suspected CTE, but "The fact that we were able to gather so many instances of a disease that was previously considered quite rare, in eight years, speaks volumes."
Other common injuries include injuries to the legs, arms, and lower back.
},
{
"paragraph_id": 25,
"text": "College football expanded greatly during the last two decades of the 19th century. Several major rivalries date from this time period.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "November 1890 was an active time in the sport. In Baldwin City, Kansas, on November 22, 1890, college football was first played in the state of Kansas. Baker beat Kansas 22–9. On the 27th, Vanderbilt played Nashville (Peabody) at Athletic Park and won 40–0. It was the first time organized football played in the state of Tennessee. The 29th also saw the first instance of the Army–Navy Game. Navy won 24–0.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "Rutgers was first to extend the reach of the game. An intercollegiate game was first played in the state of New York when Rutgers played Columbia on November 2, 1872. It was also the first scoreless tie in the history of the fledgling sport. Yale football starts the same year and has its first match against Columbia, the nearest college to play football. It took place at Hamilton Park in New Haven and was the first game in New England. The game was essentially soccer with 20-man sides, played on a field 400 by 250 feet. Yale wins 3–0, Tommy Sherman scoring the first goal and Lew Irwin the other two.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "After the first game against Harvard, Tufts took its squad to Bates College in Lewiston, Maine for the first football game played in Maine. This occurred on November 6, 1875.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Penn's Athletic Association was looking to pick \"a twenty\" to play a game of football against Columbia. This \"twenty\" never played Columbia, but did play twice against Princeton. Princeton won both games 6 to 0. The first of these happened on November 11, 1876, in Philadelphia and was the first intercollegiate game in the state of Pennsylvania.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "Brown enters the intercollegiate game in 1878.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "The first game where one team scored over 100 points happened on October 25, 1884, when Yale routed Dartmouth 113–0. It was also the first time one team scored over 100 points and the opposing team was shut out. The next week, Princeton outscored Lafayette 140 to 0.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The first intercollegiate game in the state of Vermont happened on November 6, 1886, between Dartmouth and Vermont at Burlington, Vermont. Dartmouth won 91 to 0.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "Penn State played its first season in 1887, but had no head coach for their first five years, from 1887 to 1891. The teams played its home games on the Old Main lawn on campus in State College, Pennsylvania. They compiled a 12–8–1 record in these seasons, playing as an independent from 1887 to 1890.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In 1891, the Pennsylvania Intercollegiate Football Association (PIFA) was formed. It consisted of Bucknell (University of Lewisburg), Dickinson, Franklin & Marshall, Haverford, Penn State and Swarthmore. Lafayette and Lehigh were excluded because it was felt they would dominate the Association. Penn State won the championship with a 4–1–0 record. Bucknell's record was 3–1–1 (losing to Franklin & Marshall and tying Dickinson). The Association was dissolved prior to the 1892 season.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The first nighttime football game was played in Mansfield, Pennsylvania on September 28, 1892, between Mansfield State Normal and Wyoming Seminary and ended at halftime in a 0–0 tie. The Army–Navy game of 1893 saw the first documented use of a football helmet by a player in a game. Joseph M. Reeves had a crude leather helmet made by a shoemaker in Annapolis and wore it in the game after being warned by his doctor that he risked death if he continued to play football after suffering an earlier kick to the head.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "In 1879, the University of Michigan became the first school west of Pennsylvania to establish a college football team. On May 30, 1879, Michigan beat Racine College 1–0 in a game played in Chicago. The Chicago Daily Tribune called it \"the first rugby-football game to be played west of the Alleghenies.\" Other Midwestern schools soon followed suit, including the University of Chicago, Northwestern University, and the University of Minnesota. The first western team to travel east was the 1881 Michigan team, which played at Harvard, Yale and Princeton. The nation's first college football league, the Intercollegiate Conference of Faculty Representatives (also known as the Western Conference), a precursor to the Big Ten Conference, was founded in 1895.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Led by coach Fielding H. Yost, Michigan became the first \"western\" national power. From 1901 to 1905, Michigan had a 56-game undefeated streak that included a 1902 trip to play in the first college football bowl game, which later became the Rose Bowl Game. During this streak, Michigan scored 2,831 points while allowing only 40.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "Organized intercollegiate football was first played in the state of Minnesota on September 30, 1882, when Hamline was convinced to play Minnesota. Minnesota won 2 to 0. It was the first game west of the Mississippi River.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "November 30, 1905, saw Chicago defeat Michigan 2 to 0. Dubbed \"The First Greatest Game of the Century\", it broke Michigan's 56-game unbeaten streak and marked the end of the \"Point-a-Minute\" years.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Organized intercollegiate football was first played in the state of Virginia and the south on November 2, 1873, in Lexington between Washington and Lee and VMI. Washington and Lee won 4–2. Some industrious students of the two schools organized a game for October 23, 1869, but it was rained out. Students of the University of Virginia were playing pickup games of the kicking-style of football as early as 1870, and some accounts even claim it organized a game against Washington and Lee College in 1871; but no record has been found of the score of this contest. Due to scantiness of records of the prior matches some will claim Virginia v. Pantops Academy November 13, 1887, as the first game in Virginia.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "On April 9, 1880, at Stoll Field, Transylvania University (then called Kentucky University) beat Centre College by the score of 13+3⁄4–0 in what is often considered the first recorded game played in the South. The first game of \"scientific football\" in the South was the first instance of the Victory Bell rivalry between North Carolina and Duke (then known as Trinity College) held on Thanksgiving Day, 1888, at the North Carolina State Fairgrounds in Raleigh, North Carolina.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "On November 13, 1887, the Virginia Cavaliers and Pantops Academy fought to a scoreless tie in the first organized football game in the state of Virginia. Students at UVA were playing pickup games of the kicking-style of football as early as 1870, and some accounts even claim that some industrious ones organized a game against Washington and Lee College in 1871, just two years after Rutgers and Princeton's historic first game in 1869. But no record has been found of the score of this contest. Washington and Lee also claims a 4 to 2 win over VMI in 1873.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "On October 18, 1888, the Wake Forest Demon Deacons defeated the North Carolina Tar Heels 6 to 4 in the first intercollegiate game in the state of North Carolina.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "On December 14, 1889, Wofford defeated Furman 5 to 1 in the first intercollegiate game in the state of South Carolina. The game featured no uniforms, no positions, and the rules were formulated before the game.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "January 30, 1892, saw the first football game played in the Deep South when the Georgia Bulldogs defeated Mercer 50–0 at Herty Field.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "The beginnings of the contemporary Southeastern Conference and Atlantic Coast Conference start in 1894. The Southern Intercollegiate Athletic Association (SIAA) was founded on December 21, 1894, by William Dudley, a chemistry professor at Vanderbilt. The original members were Alabama, Auburn, Georgia, Georgia Tech, North Carolina, Sewanee, and Vanderbilt. Clemson, Cumberland, Kentucky, LSU, Mercer, Mississippi, Mississippi A&M (Mississippi State), Southwestern Presbyterian University, Tennessee, Texas, Tulane, and the University of Nashville joined the following year in 1895 as invited charter members. The conference was originally formed for \"the development and purification of college athletics throughout the South\".",
"title": "History"
},
{
"paragraph_id": 47,
"text": "It is thought that the first forward pass in football occurred on October 26, 1895, in a game between Georgia and North Carolina when, out of desperation, the ball was thrown by the North Carolina back Joel Whitaker instead of punted and George Stephens caught the ball. On November 9, 1895, John Heisman executed a hidden ball trick utilizing quarterback Reynolds Tichenor to get Auburn's only touchdown in a 6 to 9 loss to Vanderbilt. It was the first game in the south decided by a field goal. Heisman later used the trick against Pop Warner's Georgia team. Warner picked up the trick and later used it at Cornell against Penn State in 1897. He then used it in 1903 at Carlisle against Harvard and garnered national attention.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "The 1899 Sewanee Tigers are one of the all-time great teams of the early sport. The team went 12–0, outscoring opponents 322 to 10. Known as the \"Iron Men\", with just 13 men they had a six-day road trip with five shutout wins over Texas A&M; Texas; Tulane; LSU; and Ole Miss. It is recalled memorably with the phrase \"... and on the seventh day they rested.\" Grantland Rice called them \"the most durable football team I ever saw.\"",
"title": "History"
},
{
"paragraph_id": 49,
"text": "Organized intercollegiate football was first played in the state of Florida in 1901. A 7-game series between intramural teams from Stetson and Forbes occurred in 1894. The first intercollegiate game between official varsity teams was played on November 22, 1901. Stetson beat Florida Agricultural College at Lake City, one of the four forerunners of the University of Florida, 6–0, in a game played as part of the Jacksonville Fair.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "On September 27, 1902, Georgetown beat Navy 4 to 0. It is claimed by Georgetown authorities as the game with the first ever \"roving center\" or linebacker when Percy Given stood up, in contrast to the usual tale of Germany Schulz. The first linebacker in the South is often considered to be Frank Juhan.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "On Thanksgiving Day 1903, a game was scheduled in Montgomery, Alabama between the best teams from each region of the Southern Intercollegiate Athletic Association for an \"SIAA championship game\", pitting Cumberland against Heisman's Clemson. The game ended in an 11–11 tie causing many teams to claim the title. Heisman pressed hardest for Cumberland to get the claim of champion. It was his last game as Clemson head coach.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "1904 saw big coaching hires in the south: Mike Donahue at Auburn, John Heisman at Georgia Tech, and Dan McGugin at Vanderbilt were all hired that year. Both Donahue and McGugin just came from the north that year, Donahue from Yale and McGugin from Michigan, and were among the initial inductees of the College Football Hall of Fame. The undefeated 1904 Vanderbilt team scored an average of 52.7 points per game, the most in college football that season, and allowed just four points.",
"title": "History"
},
{
"paragraph_id": 53,
"text": "The first college football game in Oklahoma Territory occurred on November 7, 1895, when the \"Oklahoma City Terrors\" defeated the Oklahoma Sooners 34 to 0. The Terrors were a mix of Methodist college and high school students. The Sooners did not manage a single first down. By next season, Oklahoma coach John A. Harts had left to prospect for gold in the Arctic. Organized football was first played in the territory on November 29, 1894, between the Oklahoma City Terrors and Oklahoma City High School. The high school won 24 to 0.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "The University of Southern California first fielded an American football team in 1888. Playing its first game on November 14 of that year against the Alliance Athletic Club, in which USC gained a 16–0 victory. Frank Suffel and Henry H. Goddard were playing coaches for the first team which was put together by quarterback Arthur Carroll; who in turn volunteered to make the pants for the team and later became a tailor. USC faced its first collegiate opponent the following year in fall 1889, playing St. Vincent's College to a 40–0 victory. In 1893, USC joined the Intercollegiate Football Association of Southern California (the forerunner of the SCIAC), which was composed of USC, Occidental College, Throop Polytechnic Institute (Caltech), and Chaffey College. Pomona College was invited to enter, but declined to do so. An invitation was also extended to Los Angeles High School.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "In 1891, the first Stanford football team was hastily organized and played a four-game season beginning in January 1892 with no official head coach. Following the season, Stanford captain John Whittemore wrote to Yale coach Walter Camp asking him to recommend a coach for Stanford. To Whittemore's surprise, Camp agreed to coach the team himself, on the condition that he finish the season at Yale first. As a result of Camp's late arrival, Stanford played just three official games, against San Francisco's Olympic Club and rival California. The team also played exhibition games against two Los Angeles area teams that Stanford does not include in official results. Camp returned to the East Coast following the season, then returned to coach Stanford in 1894 and 1895.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "On December 25, 1894, Amos Alonzo Stagg's Chicago Maroons agreed to play Camp's Stanford football team in San Francisco in the first postseason intersectional contest, foreshadowing the modern bowl game. Future president Herbert Hoover was Stanford's student financial manager. Chicago won 24 to 4. Stanford won a rematch in Los Angeles on December 29 by 12 to 0.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "The Big Game between Stanford and California is the oldest college football rivalry in the West. The first game was played on San Francisco's Haight Street Grounds on March 19, 1892, with Stanford winning 14–10. The term \"Big Game\" was first used in 1900, when it was played on Thanksgiving Day in San Francisco. During that game, a large group of men and boys, who were observing from the roof of the nearby S.F. and Pacific Glass Works, fell into the fiery interior of the building when the roof collapsed, resulting in 13 dead and 78 injured. On December 4, 1900, the last victim of the disaster (Fred Lilly) died, bringing the death toll to 22; and, to this day, the \"Thanksgiving Day Disaster\" remains the deadliest accident to kill spectators at a U.S. sporting event.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "The University of Oregon began playing American football in 1894 and played its first game on March 24, 1894, defeating Albany College 44–3 under head coach Cal Young. Cal Young left after that first game and J.A. Church took over the coaching position in the fall for the rest of the season. Oregon finished the season with two additional losses and a tie, but went undefeated the following season, winning all four of its games under head coach Percy Benson. In 1899, the Oregon football team left the state for the first time, playing the California Golden Bears in Berkeley, California.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "American football at Oregon State University started in 1893 shortly after athletics were initially authorized at the college. Athletics were banned at the school in May 1892, but when the strict school president, Benjamin Arnold, died, President John Bloss reversed the ban. Bloss's son William started the first team, on which he served as both coach and quarterback. The team's first game was an easy 63–0 defeat over the home team, Albany College.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "In May 1900, Yost was hired as the football coach at Stanford University, and, after traveling home to West Virginia, he arrived in Palo Alto, California, on August 21, 1900. Yost led the 1900 Stanford team to a 7–2–1, outscoring opponents 154 to 20. The next year in 1901, Yost was hired by Charles A. Baird as the head football coach for the Michigan Wolverines football team. On January 1, 1902, Yost's dominating 1901 Michigan Wolverines football team agreed to play a 3–1–2 team from Stanford University in the inaugural \"Tournament East-West football game\" what is now known as the Rose Bowl Game by a score of 49–0 after Stanford captain Ralph Fisher requested to quit with eight minutes remaining.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "The 1905 season marked the first meeting between Stanford and USC. Consequently, Stanford is USC's oldest existing rival. The Big Game between Stanford and Cal on November 11, 1905, was the first played at Stanford Field, with Stanford winning 12–5.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "In 1906, citing concerns about the violence in American Football, universities on the West Coast, led by California and Stanford, replaced the sport with rugby union. At the time, the future of American football was very much in doubt and these schools believed that rugby union would eventually be adopted nationwide. Other schools followed suit and also made the switch included Nevada, St. Mary's, Santa Clara, and USC (in 1911). However, due to the perception that West Coast football was inferior to the game played on the East Coast anyway, East Coast and Midwest teams shrugged off the loss of the teams and continued playing American football. With no nationwide movement, the available pool of rugby teams to play remained small. The schools scheduled games against local club teams and reached out to rugby union powers in Australia, New Zealand, and especially, due to its proximity, Canada. The annual Big Game between Stanford and California continued as rugby, with the winner invited by the British Columbia Rugby Union to a tournament in Vancouver over the Christmas holidays, with the winner of that tournament receiving the Cooper Keith Trophy.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "During 12 seasons of playing rugby union, Stanford was remarkably successful: the team had three undefeated seasons, three one-loss seasons, and an overall record of 94 wins, 20 losses, and 3 ties for a winning percentage of .816. However, after a few years, the school began to feel the isolation of its newly adopted sport, which was not spreading as many had hoped. Students and alumni began to clamor for a return to American football to allow wider intercollegiate competition. The pressure at rival California was stronger (especially as the school had not been as successful in the Big Game as they had hoped), and in 1915 California returned to American football. As reasons for the change, the school cited rule change back to American football, the overwhelming desire of students and supporters to play American football, interest in playing other East Coast and Midwest schools, and a patriotic desire to play an \"American\" game. California's return to American football increased the pressure on Stanford to also change back in order to maintain the rivalry. Stanford played its 1915, 1916, and 1917 \"Big Games\" as rugby union against Santa Clara and California's football \"Big Game\" in those years was against Washington, but both schools desired to restore the old traditions. The onset of American involvement in World War I gave Stanford an out: In 1918, the Stanford campus was designated as the Students' Army Training Corps headquarters for all of California, Nevada, and Utah, and the commanding officer Sam M. Parker decreed that American football was the appropriate athletic activity to train soldiers and rugby union was dropped.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "The University of Colorado began playing American football in 1890. Colorado found much success in its early years, winning eight Colorado Football Association Championships (1894–97, 1901–08).",
"title": "History"
},
{
"paragraph_id": 65,
"text": "The following was taken from the Silver & Gold newspaper of December 16, 1898. It was a recollection of the birth of Colorado football written by one of CU's original gridders, John C. Nixon, also the school's second captain. It appears here in its original form:",
"title": "History"
},
{
"paragraph_id": 66,
"text": "At the beginning of the first semester in the fall of '90 the boys rooming at the dormitory on the campus of the U. of C. being afflicted with a super-abundance of penned up energy, or perhaps having recently drifted from under the parental wing and delighting in their newly found freedom, decided among other wild schemes, to form an athletic association. Messrs Carney, Whittaker, Layton and others, who at that time constituted a majority of the male population of the University, called a meeting of the campus boys in the old medical building. Nixon was elected president and Holden secretary of the association.",
"title": "History"
},
{
"paragraph_id": 67,
"text": "It was voted that the officers constitute a committee to provide uniform suits in which to play what was called \"association football\". Suits of flannel were ultimately procured and paid for assessments on the members of the association and generous contributions from members of the faculty. ...",
"title": "History"
},
{
"paragraph_id": 68,
"text": "The Athletic Association should now invigorate its base-ball and place it at par with its football team; and it certainly has the material with which to do it. The U of C should henceforth lead the state and possibly the west in athletic sports. ...",
"title": "History"
},
{
"paragraph_id": 69,
"text": "The style of football playing has altered considerably; by the old rules, all men in front of the runner with the ball, were offside, consequently we could not send backs through and break the line ahead of the ball as is done at present. The notorious V was then in vogue, which gave a heavy team too much advantage. The mass plays being now barred, skill on the football field is more in demand than mere weight and strength.",
"title": "History"
},
{
"paragraph_id": 70,
"text": "In 1909, the Rocky Mountain Athletic Conference was founded, featuring four members: Colorado, Colorado College, Colorado School of Mines, and Colorado Agricultural College. The University of Denver and the University of Utah joined the RMAC in 1910. For its first thirty years, the RMAC was considered a major conference equivalent to today's Division I, before 7 larger members left and formed the Mountain States Conference (also called the Skyline Conference).",
"title": "History"
},
{
"paragraph_id": 71,
"text": "College football increased in popularity through the remainder of the 19th and early 20th century. It also became increasingly violent. Between 1890 and 1905, 330 college athletes died as a direct result of injuries sustained on the football field. These deaths could be attributed to the mass formations and gang tackling that characterized the sport in its early years.",
"title": "History"
},
{
"paragraph_id": 72,
"text": "No sport is wholesome in which ungenerous or mean acts which easily escape detection contribute to victory.",
"title": "History"
},
{
"paragraph_id": 73,
"text": "Charles William Eliot, President of Harvard University (1869–1909) opposing football in 1905.",
"title": "History"
},
{
"paragraph_id": 74,
"text": "The 1894 Harvard–Yale game, known as the \"Hampden Park Blood Bath\", resulted in crippling injuries for four players; the contest was suspended until 1897. The annual Army–Navy game was suspended from 1894 to 1898 for similar reasons. One of the major problems was the popularity of mass-formations like the flying wedge, in which a large number of offensive players charged as a unit against a similarly arranged defense. The resultant collisions often led to serious injuries and sometimes even death. Georgia fullback Richard Von Albade Gammon notably died on the field from concussions received against Virginia in 1897, causing Georgia, Georgia Tech, and Mercer to suspend their football programs.",
"title": "History"
},
{
"paragraph_id": 75,
"text": "The situation came to a head in 1905 when there were 19 fatalities nationwide. President Theodore Roosevelt reportedly threatened to shut down the game if drastic changes were not made. However, the threat by Roosevelt to eliminate football is disputed by sports historians. What is absolutely certain is that on October 9, 1905, Roosevelt held a meeting of football representatives from Harvard, Yale, and Princeton. Though he lectured on eliminating and reducing injuries, he never threatened to ban football. He also lacked the authority to abolish football and was, in fact, actually a fan of the sport and wanted to preserve it. The President's sons were also playing football at the college and secondary levels at the time.",
"title": "History"
},
{
"paragraph_id": 76,
"text": "Meanwhile, John H. Outland held an experimental game in Wichita, Kansas that reduced the number of scrimmage plays to earn a first down from four to three in an attempt to reduce injuries. The Los Angeles Times reported an increase in punts and considered the game much safer than regular play but that the new rule was not \"conducive to the sport\". In 1906, President Roosevelt organized a meeting among thirteen school leaders at the White House to find solutions to make the sport safer for the athletes. Because the college officials could not agree upon a change in rules, it was decided over the course of several subsequent meetings that an external governing body should be responsible. Finally, on December 28, 1905, 62 schools met in New York City to discuss rule changes to make the game safer. As a result of this meeting, the Intercollegiate Athletic Association of the United States was formed in 1906. The IAAUS was the original rule making body of college football, but would go on to sponsor championships in other sports. The IAAUS would get its current name of National Collegiate Athletic Association (NCAA) in 1910, and still sets rules governing the sport.",
"title": "History"
},
{
"paragraph_id": 77,
"text": "The rules committee considered widening the playing field to \"open up\" the game, but Harvard Stadium (the first large permanent football stadium) had recently been built at great expense; it would be rendered useless by a wider field. The rules committee legalized the forward pass instead. Though it was underutilized for years, this proved to be one of the most important rule changes in the establishment of the modern game. Another rule change banned \"mass momentum\" plays (many of which, like the infamous \"flying wedge\", were sometimes literally deadly).",
"title": "History"
},
{
"paragraph_id": 78,
"text": "As a result of the 1905–1906 reforms, mass formation plays became illegal and forward passes legal. Bradbury Robinson, playing for visionary coach Eddie Cochems at Saint Louis University, threw the first legal pass in a September 5, 1906, game against Carroll College at Waukesha. Other important changes, formally adopted in 1910, were the requirements that at least seven offensive players be on the line of scrimmage at the time of the snap, that there be no pushing or pulling, and that interlocking interference (arms linked or hands on belts and uniforms) was not allowed. These changes greatly reduced the potential for collision injuries. Several coaches emerged who took advantage of these sweeping changes. Amos Alonzo Stagg introduced such innovations as the huddle, the tackling dummy, and the pre-snap shift. Other coaches, such as Pop Warner and Knute Rockne, introduced new strategies that still remain part of the game.",
"title": "History"
},
{
"paragraph_id": 79,
"text": "Besides these coaching innovations, several rules changes during the first third of the 20th century had a profound impact on the game, mostly in opening up the passing game. In 1914, the first roughing-the-passer penalty was implemented. In 1918, the rules on eligible receivers were loosened to allow eligible players to catch the ball anywhere on the field—previously strict rules were in place allowing passes to only certain areas of the field. Scoring rules also changed during this time: field goals were lowered to three points in 1909 and touchdowns raised to six points in 1912.",
"title": "History"
},
{
"paragraph_id": 80,
"text": "Star players that emerged in the early 20th century include Jim Thorpe, Red Grange, and Bronko Nagurski; these three made the transition to the fledgling NFL and helped turn it into a successful league. Sportswriter Grantland Rice helped popularize the sport with his poetic descriptions of games and colorful nicknames for the game's biggest players, including Notre Dame's \"Four Horsemen\" backfield and Fordham University's linemen, known as the \"Seven Blocks of Granite\".",
"title": "History"
},
{
"paragraph_id": 81,
"text": "In 1907 at Champaign, Illinois Chicago and Illinois played in the first game to have a halftime show featuring a marching band. Chicago won 42–6. On November 25, 1911 Kansas played at Missouri in the first homecoming football game. The game was \"broadcast\" play-by-play over telegraph to at least 1,000 fans in Lawrence, Kansas. It ended in a 3–3 tie. The game between West Virginia and Pittsburgh on October 8, 1921, saw the first live radio broadcast of a college football game when Harold W. Arlin announced that year's Backyard Brawl played at Forbes Field on KDKA. Pitt won 21–13. On October 28, 1922, Princeton and Chicago played the first game to be nationally broadcast on radio. Princeton won 21–18 in a hotly contested game which had Princeton dubbed the \"Team of Destiny\".",
"title": "History"
},
{
"paragraph_id": 82,
"text": "One publication claims \"The first scouting done in the South was in 1905, when Dan McGugin and Captain Innis Brown, of Vanderbilt went to Atlanta to see Sewanee play Georgia Tech.\" Fuzzy Woodruff claims Davidson was the first in the south to throw a legal forward pass in 1906. The following season saw Vanderbilt execute a double pass play to set up the touchdown that beat Sewanee in a meeting of the unbeaten for the SIAA championship. Grantland Rice cited this event as the greatest thrill he ever witnessed in his years of watching sports. Vanderbilt coach Dan McGugin in Spalding's Football Guide's summation of the season in the SIAA wrote \"The standing. First, Vanderbilt; second, Sewanee, a might good second;\" and that Aubrey Lanier \"came near winning the Vanderbilt game by his brilliant dashes after receiving punts.\" Bob Blake threw the final pass to center Stein Stone, catching it near the goal amongst defenders. Honus Craig then ran in the winning touchdown.",
"title": "History"
},
{
"paragraph_id": 83,
"text": "Utilizing the \"jump shift\" offense, John Heisman's Georgia Tech Golden Tornado won 222 to 0 over Cumberland on October 7, 1916, at Grant Field in the most lopsided victory in college football history. Tech went on a 33-game winning streak during this period. The 1917 team was the first national champion from the South, led by a powerful backfield. It also had the first two players from the Deep South selected first-team All-American in Walker Carpenter and Everett Strupper. Pop Warner's Pittsburgh Panthers were also undefeated, but declined a challenge by Heisman to a game. When Heisman left Tech after 1919, his shift was still employed by protégé William Alexander.",
"title": "History"
},
{
"paragraph_id": 84,
"text": "In 1906, Vanderbilt defeated Carlisle 4 to 0, the result of a Bob Blake field goal. In 1907 Vanderbilt fought Navy to a 6 to 6 tie. In 1910 Vanderbilt held defending national champion Yale to a scoreless tie.",
"title": "History"
},
{
"paragraph_id": 85,
"text": "Helping Georgia Tech's claim to a title in 1917, the Auburn Tigers held undefeated, Chic Harley-led Big Ten champion Ohio State to a scoreless tie the week before Georgia Tech beat the Tigers 68 to 7. The next season, with many players gone due to World War I, a game was finally scheduled at Forbes Field with Pittsburgh. The Panthers, led by freshman Tom Davies, defeated Georgia Tech 32 to 0. Tech center Bum Day was the first player on a Southern team ever selected first-team All-American by Walter Camp.",
"title": "History"
},
{
"paragraph_id": 86,
"text": "1917 saw the rise of another Southern team in Centre of Danville, Kentucky. In 1921 Bo McMillin-led Centre upset defending national champion Harvard 6 to 0 in what is widely considered one of the greatest upsets in college football history. The next year Vanderbilt fought Michigan to a scoreless tie at the inaugural game at Dudley Field (now Vanderbilt Stadium), the first stadium in the South made exclusively for college football. Michigan coach Fielding Yost and Vanderbilt coach Dan McGugin were brothers-in-law, and the latter the protégé of the former. The game featured the season's two best defenses and included a goal line stand by Vanderbilt to preserve the tie. Its result was \"a great surprise to the sporting world\". Commodore fans celebrated by throwing some 3,000 seat cushions onto the field. The game features prominently in Vanderbilt's history. That same year, Alabama upset Penn 9 to 7.",
"title": "History"
},
{
"paragraph_id": 87,
"text": "Vanderbilt's line coach then was Wallace Wade, who coached Alabama to the South's first Rose Bowl victory in 1925. This game is commonly referred to as \"the game that changed the south\". Wade followed up the next season with an undefeated record and Rose Bowl tie. Georgia's 1927 \"dream and wonder team\" defeated Yale for the first time. Georgia Tech, led by Heisman protégé William Alexander, gave the dream and wonder team its only loss, and the next year were national and Rose Bowl champions. The Rose Bowl included Roy Riegels' wrong-way run. On October 12, 1929, Yale lost to Georgia in Sanford Stadium in its first trip to the south. Wade's Alabama again won a national championship and Rose Bowl in 1930.",
"title": "History"
},
{
"paragraph_id": 88,
"text": "Glenn \"Pop\" Warner coached at several schools throughout his career, including the University of Georgia, Cornell University, University of Pittsburgh, Stanford University, Iowa State University, and Temple University. One of his most famous stints was at the Carlisle Indian Industrial School, where he coached Jim Thorpe, who went on to become the first president of the National Football League, an Olympic Gold Medalist, and is widely considered one of the best overall athletes in history. Warner wrote one of the first important books of football strategy, Football for Coaches and Players, published in 1927. Though the shift was invented by Stagg, Warner's single wing and double wing formations greatly improved upon it; for almost 40 years, these were among the most important formations in football. As part of his single and double wing formations, Warner was one of the first coaches to effectively utilize the forward pass. Among his other innovations are modern blocking schemes, the three-point stance, and the reverse play. The youth football league, Pop Warner Little Scholars, was named in his honor.",
"title": "History"
},
{
"paragraph_id": 89,
"text": "Knute Rockne rose to prominence in 1913 as an end for the University of Notre Dame, then a largely unknown Midwestern Catholic school. When Army scheduled Notre Dame as a warm-up game, they thought little of the small school. Rockne and quarterback Gus Dorais made innovative use of the forward pass, still at that point a relatively unused weapon, to defeat Army 35–13 and helped establish the school as a national power. Rockne returned to coach the team in 1918, and devised the powerful Notre Dame Box offense, based on Warner's single wing. He is credited with being the first major coach to emphasize offense over defense. Rockne is also credited with popularizing and perfecting the forward pass, a seldom used play at the time. The 1924 team featured the Four Horsemen backfield. In 1927, his complex shifts led directly to a rule change whereby all offensive players had to stop for a full second before the ball could be snapped. Rather than simply a regional team, Rockne's \"Fighting Irish\" became famous for barnstorming and played any team at any location. It was during Rockne's tenure that the annual Notre Dame-University of Southern California rivalry began. He led his team to an impressive 105–12–5 record before his premature death in a plane crash in 1931. He was so famous at that point that his funeral was broadcast nationally on radio.",
"title": "History"
},
{
"paragraph_id": 90,
"text": "In the early 1930s, the college game continued to grow, particularly in the South, bolstered by fierce rivalries such as the \"South's Oldest Rivalry\", between Virginia and North Carolina and the \"Deep South's Oldest Rivalry\", between Georgia and Auburn. Although before the mid-1920s most national powers came from the Northeast or the Midwest, the trend changed when several teams from the South and the West Coast achieved national success. Wallace William Wade's 1925 Alabama team won the 1926 Rose Bowl after receiving its first national title and William Alexander's 1928 Georgia Tech team defeated California in the 1929 Rose Bowl. College football quickly became the most popular spectator sport in the South.",
"title": "History"
},
{
"paragraph_id": 91,
"text": "Several major modern college football conferences rose to prominence during this time period. The Southwest Athletic Conference had been founded in 1915. Consisting mostly of schools from Texas, the conference saw back-to-back national champions with Texas Christian University (TCU) in 1938 and Texas A&M in 1939. The Pacific Coast Conference (PCC), a precursor to the Pac-12 Conference (Pac-12), had its own back-to-back champion in the University of Southern California which was awarded the title in 1931 and 1932. The Southeastern Conference (SEC) formed in 1932 and consisted mostly of schools in the Deep South. As in previous decades, the Big Ten continued to dominate in the 1930s and 1940s, with Minnesota winning 5 titles between 1934 and 1941, and Michigan (1933, 1947, and 1948) and Ohio State (1942) also winning titles.",
"title": "History"
},
{
"paragraph_id": 92,
"text": "As it grew beyond its regional affiliations in the 1930s, college football garnered increased national attention. Four new bowl games were created: the Orange Bowl, Sugar Bowl, the Sun Bowl in 1935, and the Cotton Bowl in 1937. In lieu of an actual national championship, these bowl games, along with the earlier Rose Bowl, provided a way to match up teams from distant regions of the country that did not otherwise play. In 1936, the Associated Press began its weekly poll of prominent sports writers, ranking all of the nation's college football teams. Since there was no national championship game, the final version of the AP poll was used to determine who was crowned the National Champion of college football.",
"title": "History"
},
{
"paragraph_id": 93,
"text": "The 1930s saw growth in the passing game. Though some coaches, such as General Robert Neyland at Tennessee, continued to eschew its use, several rules changes to the game had a profound effect on teams' ability to throw the ball. In 1934, the rules committee removed two major penalties—a loss of five yards for a second incomplete pass in any series of downs and a loss of possession for an incomplete pass in the end zone—and shrunk the circumference of the ball, making it easier to grip and throw. Players who became famous for taking advantage of the easier passing game included Alabama end Don Hutson and TCU passer \"Slingin\" Sammy Baugh.",
"title": "History"
},
{
"paragraph_id": 94,
"text": "In 1935, New York City's Downtown Athletic Club awarded the first Heisman Trophy to University of Chicago halfback Jay Berwanger, who was also the first ever NFL Draft pick in 1936. The trophy was designed by sculptor Frank Eliscu and modeled after New York University player Ed Smith. The trophy recognizes the nation's \"most outstanding\" college football player and has become one of the most coveted awards in all of American sports.",
"title": "History"
},
{
"paragraph_id": 95,
"text": "During World War II, college football players enlisted in the armed forces, some playing in Europe during the war. As most of these players had eligibility left on their college careers, some of them returned to college at West Point, bringing Army back-to-back national titles in 1944 and 1945 under coach Red Blaik. Doc Blanchard (known as \"Mr. Inside\") and Glenn Davis (known as \"Mr. Outside\") both won the Heisman Trophy, in 1945 and 1946. On the coaching staff of those 1944–1946 Army teams was future Pro Football Hall of Fame coach Vince Lombardi.",
"title": "History"
},
{
"paragraph_id": 96,
"text": "The 1950s saw the rise of yet more dynasties and power programs. Oklahoma, under coach Bud Wilkinson, won three national titles (1950, 1955, 1956) and all ten Big Eight Conference championships in the decade while building a record 47-game winning streak. Woody Hayes led Ohio State to two national titles, in 1954 and 1957, and won three Big Ten titles. The Michigan State Spartans were known as the \"football factory\" during the 1950s, where coaches Clarence Munn and Duffy Daugherty led the Spartans to two national titles and two Big Ten titles after joining the Big Ten athletically in 1953. Wilkinson and Hayes, along with Robert Neyland of Tennessee, oversaw a revival of the running game in the 1950s. Passing numbers dropped from an average of 18.9 attempts in 1951 to 13.6 attempts in 1955, while teams averaged just shy of 50 running plays per game. Nine out of ten Heisman Trophy winners in the 1950s were runners. Notre Dame, one of the biggest passing teams of the decade, saw a substantial decline in success; the 1950s were the only decade between 1920 and 1990 when the team did not win at least a share of the national title. Paul Hornung, Notre Dame quarterback, did, however, win the Heisman in 1956, becoming the only player from a losing team ever to do so.",
"title": "History"
},
{
"paragraph_id": 97,
"text": "The 1956 Sugar Bowl also gained international attention when Georgia's pro-segregationist Gov. Griffin publicly threatened Georgia Tech and its President Blake Van Leer over allowing the first African American player to play in a collegiate bowl game in the south.",
"title": "History"
},
{
"paragraph_id": 98,
"text": "Following the enormous success of the 1958 NFL Championship Game, college football no longer enjoyed the same popularity as the NFL, at least on a national level. While both games benefited from the advent of television, since the late 1950s, the NFL has become a nationally popular sport while college football has maintained strong regional ties.",
"title": "History"
},
{
"paragraph_id": 99,
"text": "As professional football became a national television phenomenon, college football did as well. In the 1950s, Notre Dame, which had a large national following, formed its own network to broadcast its games, but by and large the sport still retained a mostly regional following. In 1952, the NCAA claimed all television broadcasting rights for the games of its member institutions, and it alone negotiated television rights. This situation continued until 1984, when several schools brought a suit under the Sherman Antitrust Act; the Supreme Court ruled against the NCAA and schools are now free to negotiate their own television deals. ABC Sports began broadcasting a national Game of the Week in 1966, bringing key matchups and rivalries to a national audience for the first time.",
"title": "History"
},
{
"paragraph_id": 100,
"text": "New formations and play sets continued to be developed. Emory Bellard, an assistant coach under Darrell Royal at the University of Texas, developed a three-back option style offense known as the wishbone. The wishbone is a run-heavy offense that depends on the quarterback making last second decisions on when and to whom to hand or pitch the ball to. Royal went on to teach the offense to other coaches, including Bear Bryant at Alabama, Chuck Fairbanks at Oklahoma and Pepper Rodgers at UCLA; who all adapted and developed it to their own tastes. The strategic opposite of the wishbone is the spread offense, developed by professional and college coaches throughout the 1960s and 1970s. Though some schools play a run-based version of the spread, its most common use is as a passing offense designed to \"spread\" the field both horizontally and vertically. Some teams have managed to adapt with the times to keep winning consistently. In the rankings of the most victorious programs, Michigan, Ohio State, and Alabama ranked first, second, and third in total wins.",
"title": "History"
},
{
"paragraph_id": 101,
"text": "In 1940, for the highest level of college football, there were only five bowl games (Rose, Orange, Sugar, Sun, and Cotton). By 1950, three more had joined that number and in 1970, there were still only eight major college bowl games. The number grew to eleven in 1976. At the birth of cable television and cable sports networks like ESPN, there were fifteen bowls in 1980. With more national venues and increased available revenue, the bowls saw an explosive growth throughout the 1980s and 1990s. In the thirty years from 1950 to 1980, seven bowl games were added to the schedule. From 1980 to 2008, an additional 20 bowl games were added to the schedule. Some have criticized this growth, claiming that the increased number of games has diluted the significance of playing in a bowl game. Yet others have countered that the increased number of games has increased exposure and revenue for a greater number of schools, and see it as a positive development. Teams participating in bowl games also get to practice up to four hours per day or 20 hours per week until their bowl game concludes. There is no limit on the number of practices during the bowl season, so teams that play later in the season (usually ones with more wins) get more opportunity to practice than ones that play earlier. This bowl practice period can be compared to the spring practice schedule when teams can have 15 on-field practice sessions. Many teams that play late in the bowl season use the first few practices for evaluation and development of younger players while resting the starters.",
"title": "History"
},
{
"paragraph_id": 102,
"text": "Currently, the NCAA Division I football teams are divided into two divisions - the \"football bowl subdivision\" (FBS) and the \"football championship subdivision\"(FCS). As indicated by the name, the FBS teams are eligible to play in post-season bowls. The FCS teams, Division II, Division III, National Junior College teams play in sanctioned tournaments to determine their annual champions. There is not now, and never has been, an NCAA-sanctioned tournament to determine the champion of the top-level football teams.",
"title": "Determination of national champion"
},
{
"paragraph_id": 103,
"text": "With the growth of bowl games, it became difficult to determine a national champion in a fair and equitable manner. As conferences became contractually bound to certain bowl games (a situation known as a tie-in), match-ups that guaranteed a consensus national champion became increasingly rare.",
"title": "Determination of national champion"
},
{
"paragraph_id": 104,
"text": "In 1992, seven conferences and independent Notre Dame formed the Bowl Coalition, which attempted to arrange an annual No.1 versus No.2 matchup based on the final AP poll standings. The Coalition lasted for three years; however, several scheduling issues prevented much success; tie-ins still took precedence in several cases. For example, the Big Eight and SEC champions could never meet, since they were contractually bound to different bowl games. The coalition also excluded the Rose Bowl, arguably the most prestigious game in the nation, and two major conferences—the Pac-10 and Big Ten—meaning that it had limited success.",
"title": "Determination of national champion"
},
{
"paragraph_id": 105,
"text": "In 1995, the Coalition was replaced by the Bowl Alliance, which reduced the number of bowl games to host a national championship game to three—the Fiesta, Sugar, and Orange Bowls—and the participating conferences to five—the ACC, SEC, Southwest, Big Eight, and Big East. It was agreed that the No.1 and No.2 ranked teams gave up their prior bowl tie-ins and were guaranteed to meet in the national championship game, which rotated between the three participating bowls. The system still did not include the Big Ten, Pac-10, or the Rose Bowl, and thus still lacked the legitimacy of a true national championship. However, one positive side effect is that if there were three teams at the end of the season vying for a national title, but one of them was a Pac-10/Big Ten team bound to the Rose Bowl, then there would be no difficulty in deciding which teams to place in the Bowl Alliance \"national championship\" bowl; if the Pac-10 / Big Ten team won the Rose Bowl and finished with the same record as whichever team won the other bowl game, they could have a share of the national title. This happened in the final year of the Bowl Alliance, with Michigan winning the 1998 Rose Bowl and Nebraska winning the 1998 Orange Bowl. Without the Pac-10/Big Ten team bound to a bowl game, it would be difficult to decide which two teams should play for the national title.",
"title": "Determination of national champion"
},
{
"paragraph_id": 106,
"text": "In 1998, a new system was put into place called the Bowl Championship Series. For the first time, it included all major conferences (ACC, Big East, Big 12, Big Ten, Pac-10, and SEC) and four major bowl games (Rose, Orange, Sugar and Fiesta). The champions of these six conferences, along with two \"at-large\" selections, were invited to play in the four bowl games. Each year, one of the four bowl games served as a national championship game. Also, a complex system of human polls, computer rankings, and strength of schedule calculations was instituted to rank schools. Based on this ranking system, the No.1 and No.2 teams met each year in the national championship game. Traditional tie-ins were maintained for schools and bowls not part of the national championship. For example, in years when not a part of the national championship, the Rose Bowl still hosted the Big Ten and Pac-10 champions.",
"title": "Determination of national champion"
},
{
"paragraph_id": 107,
"text": "The system continued to change, as the formula for ranking teams was tweaked from year to year. At-large teams could be chosen from any of the Division I-A conferences, though only one selection—Utah in 2005—came from a BCS non-AQ conference. Starting with the 2006 season, a fifth game—simply called the BCS National Championship Game—was added to the schedule, to be played at the site of one of the four BCS bowl games on a rotating basis, one week after the regular bowl game. This opened up the BCS to two additional at-large teams. Also, rules were changed to add the champions of five additional conferences (Conference USA [C-USA], the Mid-American Conference [MAC], the Mountain West Conference [MW], the Sun Belt Conference and the Western Athletic Conference [WAC]), provided that said champion ranked in the top twelve in the final BCS rankings, or was within the top 16 of the BCS rankings and ranked higher than the champion of at least one of the BCS Automatic Qualifying (AQ) conferences. Several times since this rule change was implemented, schools from non-AQ conferences have played in BCS bowl games. In 2009, Boise State played TCU in the Fiesta Bowl, the first time two schools from non-AQ conferences played each other in a BCS bowl game. The last team from the non-AQ ranks to reach a BCS bowl game in the BCS era was Northern Illinois in 2012, which played in (and lost) the 2013 Orange Bowl.",
"title": "Determination of national champion"
},
{
"paragraph_id": 108,
"text": "The longtime resistance to a playoff system at the FBS level finally ended with the creation of the College Football Playoff (CFP) beginning with the 2014 season. The CFP is a Plus-One system, a concept that became popular as a BCS alternative following controversies in 2003 and 2004. The CFP is a four-team tournament whose participants are chosen and seeded by a 13-member selection committee. The semifinals are hosted by two of a group of traditional bowl games known as the New Year's Six, with semifinal hosting rotating annually among three pairs of games in the following order: Rose/Sugar, Orange/Cotton, and Fiesta/Peach. The two semifinal winners then advance to the College Football Playoff National Championship, whose host is determined by open bidding several years in advance.",
"title": "Determination of national champion"
},
{
"paragraph_id": 109,
"text": "The 10 FBS conferences are formally and popularly divided into two groups:",
"title": "Determination of national champion"
},
{
"paragraph_id": 110,
"text": "Although rules for the high school, college, and NFL games are generally consistent, there are several minor differences. Before 2023, a single NCAA Football Rules Committee determined the playing rules for Division I (both Bowl and Championship Subdivisions), II, and III games (the National Association of Intercollegiate Athletics (NAIA) is a separate organization, but uses the NCAA rules). As part of an NCAA initiative to give each division more autonomy over its governance, separate rules committees have been established for each NCAA division.",
"title": "Official rules and notable rule distinctions"
},
{
"paragraph_id": 111,
"text": "College teams mostly play other similarly sized schools through the NCAA's divisional system. Division I generally consists of the major collegiate athletic powers with larger budgets, more elaborate facilities, and (with the exception of a few conferences such as the Pioneer Football League) more athletic scholarships. Division II primarily consists of smaller public and private institutions that offer fewer scholarships than those in Division I. Division III institutions also field teams, but do not offer any scholarships.",
"title": "Organization"
},
{
"paragraph_id": 112,
"text": "Football teams in Division I are further divided into the Bowl Subdivision (consisting of the largest programs) and the Championship Subdivision. The Bowl Subdivision has historically not used an organized tournament to determine its champion, and instead teams compete in post-season bowl games. That changed with the debut of the four-team College Football Playoff at the end of the 2014 season.",
"title": "Organization"
},
{
"paragraph_id": 113,
"text": "Teams in each of these four divisions are further divided into various regional conferences.",
"title": "Organization"
},
{
"paragraph_id": 114,
"text": "Several organizations operate college football programs outside the jurisdiction of the NCAA:",
"title": "Organization"
},
{
"paragraph_id": 115,
"text": "A college that fields a team in the NCAA is not restricted from fielding teams in club or sprint football, and several colleges field two teams, a varsity (NCAA) squad and a club or sprint squad (no schools, as of 2023, field both club and sprint teams at the same time).",
"title": "Organization"
},
{
"paragraph_id": 116,
"text": "Started in the 2014 season, four Division I FBS teams are selected at the end of regular season to compete in a playoff for the FBS national championship. The inaugural champion was Ohio State University. The College Football Playoff replaced the Bowl Championship Series, which had been used as a selection method to determine the national championship game participants since in the 1998 season. The Georgia Bulldogs won the most recent playoff 65–7 over the TCU Horned Frogs in the 2023 College Football Playoff.",
"title": "Playoff games"
},
{
"paragraph_id": 117,
"text": "At the Division I FCS level, the teams participate in a 24-team playoff (most recently expanded from 20 teams in 2013) to determine the national championship. Under the current playoff structure, the top eight teams are all seeded, and receive a bye week in the first round. The highest seed receives automatic home field advantage. Starting in 2013, non-seeded teams can only host a playoff game if both teams involved are unseeded; in such a matchup, the schools must bid for the right to host the game. Selection for the playoffs is determined by a selection committee, although usually a team must have an 8–4 record to even be considered. Losses to an FBS team count against their playoff eligibility, while wins against a Division II opponent do not count towards playoff consideration. Thus, only Division I wins (whether FBS, FCS, or FCS non-scholarship) are considered for playoff selection. The Division I National Championship game is held in Frisco, Texas.",
"title": "Playoff games"
},
{
"paragraph_id": 118,
"text": "Division II and Division III of the NCAA also participate in their own respective playoffs, crowning national champions at the end of the season. The National Association of Intercollegiate Athletics also holds a playoff.",
"title": "Playoff games"
},
{
"paragraph_id": 119,
"text": "Unlike other college football divisions and most other sports—collegiate or professional—the Football Bowl Subdivision, formerly known as Division I-A college football, has historically not employed a playoff system to determine a champion. Instead, it has a series of postseason \"bowl games\". The annual National Champion in the Football Bowl Subdivision is then instead traditionally determined by a vote of sports writers and other non-players.",
"title": "Bowl games"
},
{
"paragraph_id": 120,
"text": "This system has been challenged often, beginning with an NCAA committee proposal in 1979 to have a four-team playoff following the bowl games. However, little headway was made in instituting a playoff tournament until 2014, given the entrenched vested economic interests in the various bowls. Although the NCAA publishes lists of claimed FBS-level national champions in its official publications, it has never recognized an official FBS national championship; this policy continues even after the establishment of the College Football Playoff (which is not directly run by the NCAA) in 2014. As a result, the official Division I National Champion is the winner of the Football Championship Subdivision, as it is the highest level of football with an NCAA-administered championship tournament. (This also means that FBS student-athletes are the only NCAA athletes who are ineligible for the Elite 90 Award, an academic award presented to the upper class player with the highest grade-point average among the teams that advance to the championship final site.)",
"title": "Bowl games"
},
{
"paragraph_id": 121,
"text": "The first bowl game was the 1902 Rose Bowl, played between Michigan and Stanford; Michigan won 49–0. It ended when Stanford requested and Michigan agreed to end it with 8 minutes on the clock. That game was so lopsided that the game was not played annually until 1916, when the Tournament of Roses decided to reattempt the postseason game. The term \"bowl\" originates from the shape of the Rose Bowl stadium in Pasadena, California, which was built in 1923 and resembled the Yale Bowl, built in 1915. This is where the name came into use, as it became known as the Rose Bowl Game. Other games came along and used the term \"bowl\", whether the stadium was shaped like a bowl or not.",
"title": "Bowl games"
},
{
"paragraph_id": 122,
"text": "At the Division I FBS level, teams must earn the right to be bowl eligible by winning at least 6 games during the season (teams that play 13 games in a season, which is allowed for Hawaii and any of its home opponents, must win 7 games). They are then invited to a bowl game based on their conference ranking and the tie-ins that the conference has to each bowl game. For the 2009 season, there were 34 bowl games, so 68 of the 120 Division I FBS teams were invited to play at a bowl. These games are played from mid-December to early January and most of the later bowl games are typically considered more prestigious.",
"title": "Bowl games"
},
{
"paragraph_id": 123,
"text": "After the Bowl Championship Series, additional all-star bowl games round out the post-season schedule through the beginning of February.",
"title": "Bowl games"
},
{
"paragraph_id": 124,
"text": "Partly as a compromise between both bowl game and playoff supporters, the NCAA created the Bowl Championship Series (BCS) in 1998 in order to create a definitive national championship game for college football. The series included the four most prominent bowl games (Rose Bowl, Orange Bowl, Sugar Bowl, Fiesta Bowl), while the national championship game rotated each year between one of these venues. The BCS system was slightly adjusted in 2006, as the NCAA added a fifth game to the series, called the National Championship Game. This allowed the four other BCS bowls to use their normal selection process to select the teams in their games while the top two teams in the BCS rankings would play in the new National Championship Game.",
"title": "Bowl games"
},
{
"paragraph_id": 125,
"text": "The BCS selection committee used a complicated, and often controversial, computer system to rank all Division I-FBS teams and the top two teams at the end of the season played for the national championship. This computer system, which factored in newspaper polls, online polls, coaches' polls, strength of schedule, and various other factors of a team's season, led to much dispute over whether the two best teams in the country were being selected to play in the National Championship Game.",
"title": "Bowl games"
},
{
"paragraph_id": 126,
"text": "The BCS ended after the 2013 season and, since the 2014 season, the FBS national champion has been determined by a four-team tournament known as the College Football Playoff (CFP). A selection committee of college football experts decides the participating teams. Six major bowl games known as the New Year's Six (NY6)—the Rose, Sugar, Cotton, Orange, Peach, and Fiesta Bowls—rotate on a three-year cycle as semifinal games, with the winners advancing to the College Football Playoff National Championship. This arrangement was contractually locked in until the 2026 season, but an agreement was reached on CFP expansion to 12 teams effective with the 2024 season.",
"title": "Bowl games"
},
{
"paragraph_id": 127,
"text": "In the new CFP format, no conferences will receive automatic bids. Playoff berths will be awarded to the top six conference champions in the CFP rankings, plus the top six remaining teams (which may include other conference champions). The top four conference champions receive first-round byes. All first-round games will be played at the home field of the higher seed. The winners of these games advance to meet the top four seeds in the quarterfinals. The NY6 games will host the quarterfinals and semifinals, rotating so that each bowl game will host two quarterfinals and one semifinal in a three-year cycle. The CFP National Championship will continue to be held at a site determined by open bidding several years in advance.",
"title": "Bowl games"
},
{
"paragraph_id": 128,
"text": "College football is a controversial institution within American higher education, where the amount of money involved—what people will pay for the entertainment provided—is a corrupting factor within universities that they are usually ill-equipped to deal with. According to William E. Kirwan, chancellor of the University of Maryland System and co-director of the Knight Commission on Intercollegiate Athletics, \"We've reached a point where big-time intercollegiate athletics is undermining the integrity of our institutions, diverting presidents and institutions from their main purpose.\" Football coaches often make more than the presidents of the universities which employ them. Athletes are alleged to receive preferential treatment both in academics and when they run afoul of the law. Although in theory football is an extra-curricular activity engaged in as a sideline by students, it is widely believed to turn a substantial profit, from which the athletes receive no direct benefit. There has been serious discussion about making student-athletes university employees to allow them to be paid. In reality, the majority of major collegiate football programs operated at a financial loss in 2014.",
"title": "Controversy"
},
{
"paragraph_id": 129,
"text": "There had been discussions on changing rules that prohibited compensation for the use of a player's name, image, and likeness (NIL), but change did not start to come until the mid-2010s. This reform first took place in the NAIA, which initially allowed all student-athletes at its member schools to receive NIL compensation in 2014, and beginning in 2020 specifically allowed these individuals to reference their athletic participation in their endorsement deals. The NCAA passed its own NIL reform, very similar to the NAIA's most recent reform, in July 2021, after its hand was forced by multiple states that had passed legislation allowing NIL compensation, most notably California.",
"title": "Controversy"
},
{
"paragraph_id": 130,
"text": "On June 3 of 2021, \"The NCAA's Board of Directors adopts a temporary rule change that opens the door for NIL activity, instructing schools to set their own policy for what should be allowed with minimal guidelines\" (Murphy 2021). On July 1 of 2021, the new rules set in and student athletes could start signing endorsements using their name, image and likeness. \"The NCAA has asked Congress for help in creating a federal NIL law. While several federal options have been proposed, it's becoming increasingly likely that state laws will start to go into effect before a nationwide change is made. There are 28 states with NIL laws already in place and multiple others that are actively pursuing legislation\" (Murphy 2021).",
"title": "Controversy"
},
{
"paragraph_id": 131,
"text": "Canadian football, which parallels American football, is played by university teams in Canada under the auspices of U Sports. (Unlike in the United States, no junior colleges play football in Canada, and the sanctioning body for junior college athletics in Canada, CCAA, does not sanction the sport.) However, amateur football outside of colleges is played in Canada, such as in the Canadian Junior Football League. Organized competition in American football also exists at the collegiate level in Mexico (ONEFA), the UK (British Universities American Football League), Japan (Japan American Football Association, Koshien Bowl), and South Korea (Korea American Football Association).",
"title": "College football outside the United States"
},
{
"paragraph_id": 132,
"text": "According to 2017 study on brains of deceased gridiron football players, 99% of tested brains of NFL players, 88% of CFL players, 64% of semi-professional players, 91% of college football players, and 21% of high school football players had various stages of CTE. The study noted it has limitations due to \"selection bias\" in that the brains donated are from families who suspected CTE, but \"The fact that we were able to gather so many instances of a disease that was previously considered quite rare, in eight years, speaks volumes.\"",
"title": "Injuries"
},
{
"paragraph_id": 133,
"text": "Other common injuries include: injuries of legs, arms, and lower back.",
"title": "Injuries"
}
] | College football refers to gridiron football that is played by teams of amateur student-athletes at universities and colleges. It was through collegiate competition that gridiron football first gained popularity in the United States. Like gridiron football generally, college football is most popular in the United States and Canada. While no single governing body exists for college football in the United States, most schools, especially those at the highest levels of play, are members of the NCAA. In Canada, collegiate football competition is governed by U Sports for universities. The Canadian Collegiate Athletic Association governs soccer and other sports but not gridiron football. Other countries, such as Mexico, Japan and South Korea, also host college football leagues with modest levels of support. Unlike most other major sports in North America, no official minor league farm organizations exist for American football or Canadian football. Therefore, college football is generally considered to be the second tier of American and Canadian football; ahead of high school competition, but below professional competition. In some parts of the United States, especially the South and Midwest, college football is more popular than professional football. For much of the 20th century, college football was generally considered to be more prestigious than professional football. As the second highest tier of gridiron football competition in the United States, many college football players later play professionally in the NFL or other leagues. The NFL draft each spring sees 224 players selected and offered a contract to play in the league, with the vast majority coming from the NCAA. Other professional leagues, such as the CFL and XFL, additionally hold their own drafts each year which see many college players selected. Players who are not selected can still attempt to land a professional roster spot as an undrafted free agent. Despite these opportunities, only around 1.6% of NCAA college football players end up playing professionally in the NFL. | 2001-09-10T15:45:55Z | 2023-12-30T18:04:25Z | [
"Template:Redirect",
"Template:Fraction",
"Template:Div col end",
"Template:Cite magazine",
"Template:Spoken Wikipedia",
"Template:National Association of Intercollegiate Athletics",
"Template:Main",
"Template:'s",
"Template:Unreferenced section",
"Template:Div col",
"Template:Cite book",
"Template:Gridiron football concepts",
"Template:Multiple image",
"Template:Cite encyclopedia",
"Template:Commons category",
"Template:Cite web",
"Template:Short description",
"Template:Distinguish",
"Template:Cleanup rewrite",
"Template:Infobox sport overview",
"Template:See also",
"Template:Convert",
"Template:Reflist",
"Template:Webarchive",
"Template:Full citation needed",
"Template:Citation needed",
"Template:As of",
"Template:Clear",
"Template:Frac",
"Template:Quote box",
"Template:College football",
"Template:Sfn",
"Template:Nfly",
"Template:Cite news",
"Template:ISBN",
"Template:Page needed",
"Template:Dead link",
"Template:About",
"Template:Use mdy dates",
"Template:Summarize section",
"Template:Blockquote",
"Template:NFL year",
"Template:Portal",
"Template:Cite journal",
"Template:Cbignore",
"Template:Cite press release",
"Template:National Collegiate Athletic Association",
"Template:American football in the United States"
] | https://en.wikipedia.org/wiki/College_football |
6,773 | Ciprofloxacin | Ciprofloxacin is a fluoroquinolone antibiotic used to treat a number of bacterial infections. This includes bone and joint infections, intra-abdominal infections, certain types of infectious diarrhea, respiratory tract infections, skin infections, typhoid fever, and urinary tract infections, among others. For some infections it is used in addition to other antibiotics. It can be taken by mouth, as eye drops, as ear drops, or intravenously.
Common side effects include nausea, vomiting, and diarrhea. Severe side effects include an increased risk of tendon rupture, hallucinations, and nerve damage. In people with myasthenia gravis, it can worsen muscle weakness. Rates of side effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Studies in other animals raise concerns regarding use in pregnancy. No problems were identified, however, in the children of a small number of women who took the medication. It appears to be safe during breastfeeding. It is a second-generation fluoroquinolone with a broad spectrum of activity that usually results in the death of the bacteria.
Ciprofloxacin was patented in 1980 and introduced in 1987. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ciprofloxacin as critically important for human medicine. It is available as a generic medication. In 2020, it was the 132nd-most-commonly prescribed medication in the United States, with more than 4 million prescriptions.
Ciprofloxacin is used to treat a wide variety of infections, including infections of bones and joints, endocarditis, gastroenteritis, malignant otitis externa, respiratory tract infections, cellulitis, urinary tract infections, prostatitis, anthrax, and chancroid.
Ciprofloxacin only treats bacterial infections; it does not treat viral infections such as the common cold. For certain uses including acute sinusitis, lower respiratory tract infections and uncomplicated gonorrhea, ciprofloxacin is not considered a first-line agent.
Ciprofloxacin occupies an important role in treatment guidelines issued by major medical societies for the treatment of serious infections, especially those likely to be caused by Gram-negative bacteria, including Pseudomonas aeruginosa. For example, ciprofloxacin in combination with metronidazole is one of several first-line antibiotic regimens recommended by the Infectious Diseases Society of America for the treatment of community-acquired abdominal infections in adults. It also features prominently in treatment guidelines for acute pyelonephritis, complicated or hospital-acquired urinary tract infection, acute or chronic prostatitis, certain types of endocarditis, certain skin infections, and prosthetic joint infections.
In other cases, treatment guidelines are more restrictive, recommending in most cases that older, narrower-spectrum drugs be used as first-line therapy for less severe infections to minimize fluoroquinolone-resistance development. For example, the Infectious Diseases Society of America recommends the use of ciprofloxacin and other fluoroquinolones in urinary tract infections be reserved to cases of proven or expected resistance to narrower-spectrum drugs such as nitrofurantoin or trimethoprim/sulfamethoxazole. The European Association of Urology recommends ciprofloxacin as an alternative regimen for the treatment of uncomplicated urinary tract infections, but cautions that the potential for "adverse events have to be considered".
Although approved by regulatory authorities for the treatment of respiratory infections, ciprofloxacin is not recommended for respiratory infections by most treatment guidelines due in part to its modest activity against the common respiratory pathogen Streptococcus pneumoniae. "Respiratory quinolones" such as levofloxacin, having greater activity against this pathogen, are recommended as first-line agents for the treatment of community-acquired pneumonia in patients with important co-morbidities and in patients requiring hospitalization (Infectious Diseases Society of America 2007). Similarly, ciprofloxacin is not recommended as a first-line treatment for acute sinusitis.
Ciprofloxacin is approved for the treatment of gonorrhea in many countries, but this recommendation is widely regarded as obsolete due to resistance development.
In the United States, ciprofloxacin is pregnancy category C. This category includes drugs for which no adequate and well-controlled studies in human pregnancy exist, and for which animal studies have suggested the potential for harm to the fetus, but potential benefits may warrant use of the drug in pregnant women despite potential risks. An expert review of published data on experiences with ciprofloxacin use during pregnancy by the Teratogen Information System concluded therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk (quantity and quality of data=fair), but the data are insufficient to state no risk exists. Exposure to quinolones, including levofloxacin, during the first trimester is not associated with an increased risk of stillbirths, premature births, birth defects, or low birth weight.
Two small post-marketing epidemiology studies of mostly short-term, first-trimester exposure found that fluoroquinolones did not increase risk of major malformations, spontaneous abortions, premature birth, or low birth weight. The label notes, however, that these studies are insufficient to reliably evaluate the definitive safety or risk of less common defects by ciprofloxacin in pregnant women and their developing fetuses.
Fluoroquinolones have been reported as present in a mother's milk and thus passed on to the nursing child. The U.S. Food and Drug Administration (FDA) recommends that because of the risk of serious adverse reactions (including articular damage) in infants nursing from mothers taking ciprofloxacin, a decision should be made whether to discontinue nursing or discontinue the drug, taking into account the importance of the drug to the mother.
Oral and intravenous ciprofloxacin are approved by the FDA for use in children for only two indications due to the risk of permanent injury to the musculoskeletal system:
Current recommendations by the American Academy of Pediatrics note the systemic use of ciprofloxacin in children should be restricted to infections caused by multidrug-resistant pathogens or when no safe or effective alternatives are available.
Its spectrum of activity includes most strains of bacterial pathogens responsible for community-acquired pneumonias, bronchitis, urinary tract infections, and gastroenteritis. Ciprofloxacin is particularly effective against Gram-negative bacteria (such as Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Legionella pneumophila, Moraxella catarrhalis, Proteus mirabilis, and Pseudomonas aeruginosa), but is less effective against Gram-positive bacteria (such as methicillin-sensitive Staphylococcus aureus, Streptococcus pneumoniae, and Enterococcus faecalis) than newer fluoroquinolones.
As a result of its widespread use to treat minor infections readily treatable with older, narrower-spectrum antibiotics, many bacteria have developed resistance to this drug in recent years, leaving it significantly less effective than it would have been otherwise.
Resistance to ciprofloxacin and other fluoroquinolones may evolve rapidly, even during a course of treatment. Numerous pathogens, including enterococci, Streptococcus pyogenes, and quinolone-resistant Klebsiella pneumoniae, now exhibit resistance. Widespread veterinary usage of fluoroquinolones, particularly in Europe, has been implicated. Meanwhile, some Burkholderia cepacia, Clostridium innocuum, and Enterococcus faecium strains have developed resistance to ciprofloxacin to varying degrees.
Fluoroquinolones had become the class of antibiotics most commonly prescribed to adults in 2002. Nearly half (42%) of those prescriptions in the U.S. were for conditions not approved by the FDA, such as acute bronchitis, otitis media, and acute upper respiratory tract infection, according to a study supported in part by the Agency for Healthcare Research and Quality. Additionally, they were commonly prescribed for medical conditions that were not bacterial at all, such as viral infections, or those for which no proven benefit existed.
Contraindications include:
Ciprofloxacin is also considered to be contraindicated in children (except for the indications outlined above), in pregnancy, to nursing mothers, and in people with epilepsy or other seizure disorders.
Caution may be required in people with Marfan syndrome or Ehlers-Danlos syndrome.
Adverse effects can involve the tendons, muscles, joints, nerves, and the central nervous system.
Rates of adverse effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Compared to other antibiotics some studies find a higher rate of adverse effects while others find no difference.
In clinical trials most of the adverse events were described as mild or moderate in severity, abated soon after the drug was discontinued, and required no treatment. Some adverse effects may be permanent. Ciprofloxacin was stopped because of an adverse event in 1% of people treated with the medication by mouth. The most frequently reported drug-related events, from trials of all formulations, all dosages, all drug-therapy durations, and for all indications, were nausea (2.5%), diarrhea (1.6%), abnormal liver function tests (1.3%), vomiting (1%), and rash (1%). Other adverse events occurred at rates of <1%.
Ciprofloxacin includes a boxed warning in the United States due to an increased risk of tendinitis and tendon rupture, especially in people who are older than 60 years, people who also use corticosteroids, and people with kidney, lung, or heart transplants. Tendon rupture can occur during therapy or even months after discontinuation of the medication. One study found that fluoroquinolone use was associated with a 1.9-fold increase in tendon problems. The risk increased to 3.2 in those over 60 years of age and to 6.2 in those over the age of 60 who were also taking corticosteroids. Among the 46,766 quinolone users in the study, 38 (0.08%) cases of Achilles tendon rupture were identified.
The fluoroquinolones, including ciprofloxacin, are associated with an increased risk of cardiac toxicity, including QT interval prolongation, torsades de pointes, ventricular arrhythmia, and sudden death.
Because ciprofloxacin is lipophilic, it can cross the blood–brain barrier. The 2013 FDA label warns of nervous system effects. Ciprofloxacin, like other fluoroquinolones, is known to trigger seizures or lower the seizure threshold, and may cause other central nervous system adverse effects. Headache, dizziness, and insomnia have been reported as occurring fairly commonly in postapproval review articles, along with a much lower incidence of serious CNS adverse effects such as tremors, psychosis, anxiety, hallucinations, paranoia, and suicide attempts, especially at higher doses. Like other fluoroquinolones, it is also known to cause peripheral neuropathy that may be irreversible, such as weakness, burning pain, tingling or numbness.
Ciprofloxacin is active in six of eight in vitro assays used as rapid screens for the detection of genotoxic effects, but is not active in in vivo assays of genotoxicity. Long-term carcinogenicity studies in rats and mice resulted in no carcinogenic or tumorigenic effects due to ciprofloxacin at daily oral dose levels up to 250 and 750 mg/kg to rats and mice, respectively (about 1.7 and 2.5 times the highest recommended therapeutic dose based upon mg/m²). Results from photo co-carcinogenicity testing indicate ciprofloxacin does not reduce the time to appearance of UV-induced skin tumors as compared to vehicle control.
The other black box warning is that ciprofloxacin should not be used in people with myasthenia gravis due to possible exacerbation of muscle weakness, which may lead to breathing problems requiring ventilator support and can result in death. Fluoroquinolones are known to block neuromuscular transmission. There are concerns that fluoroquinolones, including ciprofloxacin, can affect cartilage in young children.
Clostridium difficile-associated diarrhea is a serious adverse effect of ciprofloxacin and other fluoroquinolones; it is unclear whether the risk is higher than with other broad-spectrum antibiotics.
A wide range of rare but potentially fatal adverse effects reported to the U.S. FDA or the subject of case reports includes aortic dissection, toxic epidermal necrolysis, Stevens–Johnson syndrome, low blood pressure, allergic pneumonitis, bone marrow suppression, hepatitis or liver failure, and sensitivity to light. The medication should be discontinued if a rash, jaundice, or other sign of hypersensitivity occurs.
Children and the elderly are at a much greater risk of experiencing adverse reactions.
Overdose of ciprofloxacin may result in reversible renal toxicity. Treatment of overdose includes emptying of the stomach by induced vomiting or gastric lavage, as well as administration of antacids containing magnesium, aluminium, or calcium to reduce drug absorption. Renal function and urinary pH should be monitored. Important support includes adequate hydration and urine acidification if necessary to prevent crystalluria. Hemodialysis or peritoneal dialysis can only remove less than 10% of ciprofloxacin. Ciprofloxacin may be quantified in plasma or serum to monitor for drug accumulation in patients with hepatic dysfunction or to confirm a diagnosis of poisoning in acute overdose victims.
Ciprofloxacin interacts with certain foods and several other drugs leading to undesirable increases or decreases in the serum levels or distribution of one or both drugs.
Ciprofloxacin should not be taken with antacids containing magnesium or aluminum, highly buffered drugs (sevelamer, lanthanum carbonate, sucralfate, didanosine), or with supplements containing calcium, iron, or zinc. It should be taken two hours before or six hours after these products. Magnesium or aluminum antacids turn ciprofloxacin into insoluble salts that are not readily absorbed by the intestinal tract, reducing peak serum concentrations by 90% or more, leading to therapeutic failure. Additionally, it should not be taken with dairy products or calcium-fortified juices alone, as peak serum concentration and the area under the serum concentration-time curve can be reduced up to 40%. However, ciprofloxacin may be taken with dairy products or calcium-fortified juices as part of a meal.
Ciprofloxacin inhibits the drug-metabolizing enzyme CYP1A2 and thereby can reduce the clearance of drugs metabolized by that enzyme. CYP1A2 substrates that exhibit increased serum levels in ciprofloxacin-treated patients include tizanidine, theophylline, caffeine, methylxanthines, clozapine, olanzapine, and ropinirole. Co-administration of ciprofloxacin with the CYP1A2 substrate tizanidine (Zanaflex) is contraindicated due to a 583% increase in the peak serum concentrations of tizanidine when administered with ciprofloxacin as compared to administration of tizanidine alone. Use of ciprofloxacin is cautioned in patients on theophylline due to theophylline's narrow therapeutic index. The authors of one review recommended that patients being treated with ciprofloxacin reduce their caffeine intake. Evidence for significant interactions with several other CYP1A2 substrates such as cyclosporine is equivocal or conflicting.
The Committee on Safety of Medicines and the FDA warn that central nervous system adverse effects, including seizure risk, may be increased when NSAIDs are combined with quinolones. The mechanism for this interaction may involve a synergistic increase in the antagonism of GABA neurotransmission.
Altered serum levels of the antiepileptic drugs phenytoin and carbamazepine (increased and decreased) have been reported in patients receiving concomitant ciprofloxacin.
Ciprofloxacin is a potent inhibitor of CYP1A2, CYP2D6, and CYP3A4.
Ciprofloxacin is a broad-spectrum antibiotic of the fluoroquinolone class. It is active against some Gram-positive and many Gram-negative bacteria. It functions by inhibiting a type II topoisomerase (DNA gyrase) and topoisomerase IV, necessary to separate bacterial DNA, thereby inhibiting cell division. Bacterial DNA fragmentation will occur as a result of inhibition of the enzymes.
Ciprofloxacin for systemic administration is available as immediate-release tablets, extended-release tablets, an oral suspension, and as a solution for intravenous administration. When administered over one hour as an intravenous infusion, ciprofloxacin rapidly distributes into the tissues, with levels in some tissues exceeding those in the serum. Penetration into the central nervous system is relatively modest, with cerebrospinal fluid levels normally less than 10% of peak serum concentrations. The serum half-life of ciprofloxacin is about 4–6 hours, with 50–70% of an administered dose being excreted in the urine as unmetabolized drug. An additional 10% is excreted in urine as metabolites. Urinary excretion is virtually complete 24 hours after administration. Dose adjustment is required in the elderly and in those with renal impairment.
Ciprofloxacin is weakly bound to serum proteins (20–40%). It is an inhibitor of the drug-metabolizing enzyme cytochrome P450 1A2, which leads to the potential for clinically important drug interactions with drugs metabolized by that enzyme.
Ciprofloxacin has an oral bioavailability of about 70%, so a slightly higher dose is needed to achieve the same exposure when switching from intravenous to oral administration.
The extended release oral tablets allow once-daily administration by releasing the drug more slowly in the gastrointestinal tract. These tablets contain 35% of the administered dose in an immediate-release form and 65% in a slow-release matrix. Maximum serum concentrations are achieved between 1 and 4 hours after administration. Compared to the 250- and 500-mg immediate-release tablets, the 500-mg and 1000-mg XR tablets provide higher Cmax, but the 24‑hour AUCs are equivalent.
Ciprofloxacin immediate-release tablets contain ciprofloxacin as the hydrochloride salt, and the XR tablets contain a mixture of the hydrochloride salt and the free base.
Ciprofloxacin is 1-cyclopropyl-6-fluoro-1,4-dihydro-4-oxo-7-(1-piperazinyl)-3-quinolinecarboxylic acid. Its empirical formula is C17H18FN3O3 and its molecular weight is 331.4 g/mol. It is a faintly yellowish to light yellow crystalline substance.
Ciprofloxacin hydrochloride (USP) is the monohydrochloride monohydrate salt of ciprofloxacin. It is a faintly yellowish to light yellow crystalline substance with a molecular weight of 385.8 g/mol. Its empirical formula is C17H18FN3O3•HCl•H2O.
Ciprofloxacin is the most widely used of the second-generation quinolones. In 2010, over 20 million prescriptions were written, making it the 35th-most-commonly prescribed generic drug and the 5th-most-commonly prescribed antibacterial in the U.S.
The first members of the quinolone antibacterial class were relatively low-potency drugs such as nalidixic acid, used mainly in the treatment of urinary tract infections owing to their renal excretion and propensity to be concentrated in urine. In 1979, the publication of a patent filed by the pharmaceutical arm of Kyorin Seiyaku Kabushiki Kaisha disclosed the discovery of norfloxacin, and the demonstration that certain structural modifications, including the attachment of a fluorine atom to the quinolone ring, lead to dramatically enhanced antibacterial potency. In the aftermath of this disclosure, several other pharmaceutical companies initiated research and development programs with the goal of discovering additional antibacterial agents of the fluoroquinolone class.
The fluoroquinolone program at Bayer focused on examining the effects of very minor changes to the norfloxacin structure. In 1983, the company published in vitro potency data for ciprofloxacin, a fluoroquinolone antibacterial having a chemical structure differing from that of norfloxacin by the presence of a single carbon atom. This small change led to a two- to 10-fold increase in potency against most strains of Gram-negative bacteria. Importantly, this structural change led to a four-fold improvement in activity against the important Gram-negative pathogen Pseudomonas aeruginosa, making ciprofloxacin one of the most potent known drugs for the treatment of this intrinsically antibiotic-resistant pathogen.
The oral tablet form of ciprofloxacin was approved in October 1987, just one year after the approval of norfloxacin. In 1991, the intravenous formulation was introduced. Ciprofloxacin sales reached a peak of about €2 billion in 2001, before Bayer's patent expired in 2004; annual sales have since averaged around €200 million.
The name probably originates from the International Scientific Nomenclature: ci- (alteration of cycl-) + propyl + fluor- + ox- + az- + -mycin.
It is available as a generic medication and is inexpensive.
On 24 October 2001, the Prescription Access Litigation (PAL) project filed suit to dissolve an agreement between Bayer and three of its competitors which produced generic versions of ciprofloxacin (Barr Laboratories, Rugby Laboratories, and Hoechst-Marion-Roussel), an agreement that PAL claimed was blocking access to adequate supplies and cheaper, generic versions of the drug. The plaintiffs charged that Bayer Corporation, a unit of Bayer AG, had unlawfully paid the three competing companies a total of $200 million to prevent cheaper, generic versions of ciprofloxacin from being brought to the market, as well as manipulating its price and supply. Numerous other consumer advocacy groups joined the lawsuit. On 15 October 2008, five years after Bayer's patent had expired, the United States District Court for the Eastern District of New York granted Bayer's and the other defendants' motion for summary judgment, holding that any anticompetitive effects caused by the settlement agreements between Bayer and its codefendants were within the exclusionary zone of the patent and thus could not be redressed by federal antitrust law, in effect upholding Bayer's agreement with its competitors.
Ciprofloxacin for systemic administration is available as immediate-release tablets, as extended-release tablets, as an oral suspension, and as a solution for intravenous infusion. It is also available for local administration as eye drops and ear drops.
A class action was filed against Bayer AG on behalf of employees of the Brentwood Post Office in Washington, D.C., and workers at the U.S. Capitol, along with employees of American Media, Inc. in Florida and postal workers in general who alleged they developed serious adverse effects from taking ciprofloxacin in the aftermath of the anthrax attacks in 2001. The action alleged Bayer failed to warn class members of the potential side effects of the drug, thereby violating the Pennsylvania Unfair Trade Practices and Consumer Protection Laws. The class action was defeated and the litigation abandoned by the plaintiffs. A similar action was filed in 2003 in New Jersey by four New Jersey postal workers but was withdrawn for lack of grounds, as workers had been informed of the risks of ciprofloxacin when they were given the option of taking the drug.
As resistance to ciprofloxacin has grown since its introduction, research has been conducted to discover and develop analogs that can be effective against resistant bacteria; some have been looked at in antiviral models as well. | [
{
"paragraph_id": 0,
"text": "Ciprofloxacin is a fluoroquinolone antibiotic used to treat a number of bacterial infections. This includes bone and joint infections, intra-abdominal infections, certain types of infectious diarrhea, respiratory tract infections, skin infections, typhoid fever, and urinary tract infections, among others. For some infections it is used in addition to other antibiotics. It can be taken by mouth, as eye drops, as ear drops, or intravenously.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Common side effects include nausea, vomiting, and diarrhea. Severe side effects include an increased risk of tendon rupture, hallucinations, and nerve damage. In people with myasthenia gravis, there is worsening muscle weakness. Rates of side effects appear to be higher than some groups of antibiotics such as cephalosporins but lower than others such as clindamycin. Studies in other animals raise concerns regarding use in pregnancy. No problems were identified, however, in the children of a small number of women who took the medication. It appears to be safe during breastfeeding. It is a second-generation fluoroquinolone with a broad spectrum of activity that usually results in the death of the bacteria.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Ciprofloxacin was patented in 1980 and introduced in 1987. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ciprofloxacin as critically important for human medicine. It is available as a generic medication. In 2020, it was the 132nd-most-commonly prescribed medication in the United States, with more than 4 million prescriptions.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Ciprofloxacin is used to treat a wide variety of infections, including infections of bones and joints, endocarditis, gastroenteritis, malignant otitis externa, respiratory tract infections, cellulitis, urinary tract infections, prostatitis, anthrax, and chancroid.",
"title": "Medical uses"
},
{
"paragraph_id": 4,
"text": "Ciprofloxacin only treats bacterial infections; it does not treat viral infections such as the common cold. For certain uses including acute sinusitis, lower respiratory tract infections and uncomplicated gonorrhea, ciprofloxacin is not considered a first-line agent.",
"title": "Medical uses"
},
{
"paragraph_id": 5,
"text": "Ciprofloxacin occupies an important role in treatment guidelines issued by major medical societies for the treatment of serious infections, especially those likely to be caused by Gram-negative bacteria, including Pseudomonas aeruginosa. For example, ciprofloxacin in combination with metronidazole is one of several first-line antibiotic regimens recommended by the Infectious Diseases Society of America for the treatment of community-acquired abdominal infections in adults. It also features prominently in treatment guidelines for acute pyelonephritis, complicated or hospital-acquired urinary tract infection, acute or chronic prostatitis, certain types of endocarditis, certain skin infections, and prosthetic joint infections.",
"title": "Medical uses"
},
{
"paragraph_id": 6,
"text": "In other cases, treatment guidelines are more restrictive, recommending in most cases that older, narrower-spectrum drugs be used as first-line therapy for less severe infections to minimize fluoroquinolone-resistance development. For example, the Infectious Diseases Society of America recommends the use of ciprofloxacin and other fluoroquinolones in urinary tract infections be reserved to cases of proven or expected resistance to narrower-spectrum drugs such as nitrofurantoin or trimethoprim/sulfamethoxazole. The European Association of Urology recommends ciprofloxacin as an alternative regimen for the treatment of uncomplicated urinary tract infections, but cautions that the potential for \"adverse events have to be considered\".",
"title": "Medical uses"
},
{
"paragraph_id": 7,
"text": "Although approved by regulatory authorities for the treatment of respiratory infections, ciprofloxacin is not recommended for respiratory infections by most treatment guidelines due in part to its modest activity against the common respiratory pathogen Streptococcus pneumoniae. \"Respiratory quinolones\" such as levofloxacin, having greater activity against this pathogen, are recommended as first line agents for the treatment of community-acquired pneumonia in patients with important co-morbidities and in patients requiring hospitalization (Infectious Diseases Society of America 2007). Similarly, ciprofloxacin is not recommended as a first-line treatment for acute sinusitis.",
"title": "Medical uses"
},
{
"paragraph_id": 8,
"text": "Ciprofloxacin is approved for the treatment of gonorrhea in many countries, but this recommendation is widely regarded as obsolete due to resistance development.",
"title": "Medical uses"
},
{
"paragraph_id": 9,
"text": "In the United States, ciprofloxacin is pregnancy category C. This category includes drugs for which no adequate and well-controlled studies in human pregnancy exist, and for which animal studies have suggested the potential for harm to the fetus, but potential benefits may warrant use of the drug in pregnant women despite potential risks. An expert review of published data on experiences with ciprofloxacin use during pregnancy by the Teratogen Information System concluded therapeutic doses during pregnancy are unlikely to pose a substantial teratogenic risk (quantity and quality of data=fair), but the data are insufficient to state no risk exists. Exposure to quinolones, including levofloxacin, during the first-trimester is not associated with an increased risk of stillbirths, premature births, birth defects, or low birth weight.",
"title": "Medical uses"
},
{
"paragraph_id": 10,
"text": "Two small post-marketing epidemiology studies of mostly short-term, first-trimester exposure found that fluoroquinolones did not increase risk of major malformations, spontaneous abortions, premature birth, or low birth weight. The label notes, however, that these studies are insufficient to reliably evaluate the definitive safety or risk of less common defects by ciprofloxacin in pregnant women and their developing fetuses.",
"title": "Medical uses"
},
{
"paragraph_id": 11,
"text": "Fluoroquinolones have been reported as present in a mother's milk and thus passed on to the nursing child. The U.S. Food and Drug Administration (FDA) recommends that because of the risk of serious adverse reactions (including articular damage) in infants nursing from mothers taking ciprofloxacin, a decision should be made whether to discontinue nursing or discontinue the drug, taking into account the importance of the drug to the mother.",
"title": "Medical uses"
},
{
"paragraph_id": 12,
"text": "Oral and intravenous ciprofloxacin are approved by the FDA for use in children for only two indications due to the risk of permanent injury to the musculoskeletal system:",
"title": "Medical uses"
},
{
"paragraph_id": 13,
"text": "Current recommendations by the American Academy of Pediatrics note the systemic use of ciprofloxacin in children should be restricted to infections caused by multidrug-resistant pathogens or when no safe or effective alternatives are available.",
"title": "Medical uses"
},
{
"paragraph_id": 14,
"text": "Its spectrum of activity includes most strains of bacterial pathogens responsible for community-acquired pneumonias, bronchitis, urinary tract infections, and gastroenteritis. Ciprofloxacin is particularly effective against Gram-negative bacteria (such as Escherichia coli, Haemophilus influenzae, Klebsiella pneumoniae, Legionella pneumophila, Moraxella catarrhalis, Proteus mirabilis, and Pseudomonas aeruginosa), but is less effective against Gram-positive bacteria (such as methicillin-sensitive Staphylococcus aureus, Streptococcus pneumoniae, and Enterococcus faecalis) than newer fluoroquinolones.",
"title": "Medical uses"
},
{
"paragraph_id": 15,
"text": "As a result of its widespread use to treat minor infections readily treatable with older, narrower-spectrum antibiotics, many bacteria have developed resistance to this drug in recent years, leaving it significantly less effective than it would have been otherwise.",
"title": "Medical uses"
},
{
"paragraph_id": 16,
"text": "Resistance to ciprofloxacin and other fluoroquinolones may evolve rapidly, even during a course of treatment. Numerous pathogens, including enterococci, Streptococcus pyogenes , and Klebsiella pneumoniae (quinolone-resistant) now exhibit resistance. Widespread veterinary usage of fluoroquinolones, particularly in Europe, has been implicated. Meanwhile, some Burkholderia cepacia, Clostridium innocuum, and Enterococcus faecium strains have developed resistance to ciprofloxacin to varying degrees.",
"title": "Medical uses"
},
{
"paragraph_id": 17,
"text": "Fluoroquinolones had become the class of antibiotics most commonly prescribed to adults in 2002. Nearly half (42%) of those prescriptions in the U.S. were for conditions not approved by the FDA, such as acute bronchitis, otitis media, and acute upper respiratory tract infection, according to a study supported in part by the Agency for Healthcare Research and Quality. Additionally, they were commonly prescribed for medical conditions that were not even bacterial to begin with, such as viral infections, or those for which no proven benefit existed.",
"title": "Medical uses"
},
{
"paragraph_id": 18,
"text": "Contraindications include:",
"title": "Contraindications"
},
{
"paragraph_id": 19,
"text": "Ciprofloxacin is also considered to be contraindicated in children (except for the indications outlined above), in pregnancy, to nursing mothers, and in people with epilepsy or other seizure disorders.",
"title": "Contraindications"
},
{
"paragraph_id": 20,
"text": "Caution may be required in people with Marfan syndrome or Ehlers-Danlos syndrome.",
"title": "Contraindications"
},
{
"paragraph_id": 21,
"text": "Adverse effects can involve the tendons, muscles, joints, nerves, and the central nervous system.",
"title": "Adverse effects"
},
{
"paragraph_id": 22,
"text": "Rates of adverse effects appear to be higher than with some groups of antibiotics such as cephalosporins but lower than with others such as clindamycin. Compared to other antibiotics some studies find a higher rate of adverse effects while others find no difference.",
"title": "Adverse effects"
},
{
"paragraph_id": 23,
"text": "In clinical trials most of the adverse events were described as mild or moderate in severity, abated soon after the drug was discontinued, and required no treatment. Some adverse effects may be permanent. Ciprofloxacin was stopped because of an adverse event in 1% of people treated with the medication by mouth. The most frequently reported drug-related events, from trials of all formulations, all dosages, all drug-therapy durations, and for all indications, were nausea (2.5%), diarrhea (1.6%), abnormal liver function tests (1.3%), vomiting (1%), and rash (1%). Other adverse events occurred at rates of <1%.",
"title": "Adverse effects"
},
{
"paragraph_id": 24,
"text": "Ciprofloxacin includes a boxed warning in the United States due to an increased risk of tendinitis and tendon rupture, especially in people who are older than 60 years, people who also use corticosteroids, and people with kidney, lung, or heart transplants. Tendon rupture can occur during therapy or even months after discontinuation of the medication. One study found that fluoroquinolone use was associated with a 1.9-fold increase in tendon problems. The risk increased to 3.2 in those over 60 years of age and to 6.2 in those over the age of 60 who were also taking corticosteroids. Among the 46,766 quinolone users in the study, 38 (0.08%) cases of Achilles tendon rupture were identified.",
"title": "Adverse effects"
},
{
"paragraph_id": 25,
"text": "The fluoroquinolones, including ciprofloxacin, are associated with an increased risk of cardiac toxicity, including QT interval prolongation, torsades de pointes, ventricular arrhythmia, and sudden death.",
"title": "Adverse effects"
},
{
"paragraph_id": 26,
"text": "Because Ciprofloxacin is lipophilic, it has the ability to cross the blood–brain barrier. The 2013 FDA label warns of nervous system effects. Ciprofloxacin, like other fluoroquinolones, is known to trigger seizures or lower the seizure threshold, and may cause other central nervous system adverse effects. Headache, dizziness, and insomnia have been reported as occurring fairly commonly in postapproval review articles, along with a much lower incidence of serious CNS adverse effects such as tremors, psychosis, anxiety, hallucinations, paranoia, and suicide attempts, especially at higher doses. Like other fluoroquinolones, it is also known to cause peripheral neuropathy that may be irreversible, such as weakness, burning pain, tingling or numbness.",
"title": "Adverse effects"
},
{
"paragraph_id": 27,
"text": "Ciprofloxacin is active in six of eight in vitro assays used as rapid screens for the detection of genotoxic effects, but is not active in in vivo assays of genotoxicity. Long-term carcinogenicity studies in rats and mice resulted in no carcinogenic or tumorigenic effects due to ciprofloxacin at daily oral dose levels up to 250 and 750 mg/kg to rats and mice, respectively (about 1.7 and 2.5 times the highest recommended therapeutic dose based upon mg/m). Results from photo co-carcinogenicity testing indicate ciprofloxacin does not reduce the time to appearance of UV-induced skin tumors as compared to vehicle control.",
"title": "Adverse effects"
},
{
"paragraph_id": 28,
"text": "The other black box warning is that ciprofloxacin should not be used in people with myasthenia gravis due to possible exacerbation of muscle weakness which may lead to breathing problems resulting in death or ventilator support. Fluoroquinolones are known to block neuromuscular transmission. There are concerns that fluoroquinolones including ciprofloxacin can affect cartilage in young children.",
"title": "Adverse effects"
},
{
"paragraph_id": 29,
"text": "Clostridium difficile-associated diarrhea is a serious adverse effect of ciprofloxacin and other fluoroquinolones; it is unclear whether the risk is higher than with other broad-spectrum antibiotics.",
"title": "Adverse effects"
},
{
"paragraph_id": 30,
"text": "A wide range of rare but potentially fatal adverse effects reported to the U.S. FDA or the subject of case reports includes aortic dissection, toxic epidermal necrolysis, Stevens–Johnson syndrome, low blood pressure, allergic pneumonitis, bone marrow suppression, hepatitis or liver failure, and sensitivity to light. The medication should be discontinued if a rash, jaundice, or other sign of hypersensitivity occurs.",
"title": "Adverse effects"
},
{
"paragraph_id": 31,
"text": "Children and the elderly are at a much greater risk of experiencing adverse reactions.",
"title": "Adverse effects"
},
{
"paragraph_id": 32,
"text": "Overdose of ciprofloxacin may result in reversible renal toxicity. Treatment of overdose includes emptying of the stomach by induced vomiting or gastric lavage, as well as administration of antacids containing magnesium, aluminium, or calcium to reduce drug absorption. Renal function and urinary pH should be monitored. Important support includes adequate hydration and urine acidification if necessary to prevent crystalluria. Hemodialysis or peritoneal dialysis can only remove less than 10% of ciprofloxacin. Ciprofloxacin may be quantified in plasma or serum to monitor for drug accumulation in patients with hepatic dysfunction or to confirm a diagnosis of poisoning in acute overdose victims.",
"title": "Overdose"
},
{
"paragraph_id": 33,
"text": "Ciprofloxacin interacts with certain foods and several other drugs leading to undesirable increases or decreases in the serum levels or distribution of one or both drugs.",
"title": "Interactions"
},
{
"paragraph_id": 34,
"text": "Ciprofloxacin should not be taken with antacids containing magnesium or aluminum, highly buffered drugs (sevelamer, lanthanum carbonate, sucralfate, didanosine), or with supplements containing calcium, iron, or zinc. It should be taken two hours before or six hours after these products. Magnesium or aluminum antacids turn ciprofloxacin into insoluble salts that are not readily absorbed by the intestinal tract, reducing peak serum concentrations by 90% or more, leading to therapeutic failure. Additionally, it should not be taken with dairy products or calcium-fortified juices alone, as peak serum concentration and the area under the serum concentration-time curve can be reduced up to 40%. However, ciprofloxacin may be taken with dairy products or calcium-fortified juices as part of a meal.",
"title": "Interactions"
},
{
"paragraph_id": 35,
"text": "Ciprofloxacin inhibits the drug-metabolizing enzyme CYP1A2 and thereby can reduce the clearance of drugs metabolized by that enzyme. CYP1A2 substrates that exhibit increased serum levels in ciprofloxacin-treated patients include tizanidine, theophylline, caffeine, methylxanthines, clozapine, olanzapine, and ropinirole. Co-administration of ciprofloxacin with the CYP1A2 substrate tizanidine (Zanaflex) is contraindicated due to a 583% increase in the peak serum concentrations of tizanidine when administered with ciprofloxacin as compared to administration of tizanidine alone. Use of ciprofloxacin is cautioned in patients on theophylline due to its narrow therapeutic index. The authors of one review recommended that patients being treated with ciprofloxacin reduce their caffeine intake. Evidence for significant interactions with several other CYP1A2 substrates such as cyclosporine is equivocal or conflicting.",
"title": "Interactions"
},
{
"paragraph_id": 36,
"text": "The Committee on Safety of Medicines and the FDA warn that central nervous system adverse effects, including seizure risk, may be increased when NSAIDs are combined with quinolones. The mechanism for this interaction may involve a synergistic increased antagonism of GABA neurotransmission.",
"title": "Interactions"
},
{
"paragraph_id": 37,
"text": "Altered serum levels of the antiepileptic drugs phenytoin and carbamazepine (increased and decreased) have been reported in patients receiving concomitant ciprofloxacin.",
"title": "Interactions"
},
{
"paragraph_id": 38,
"text": "Ciprofloxacin is a potent inhibitor of CYP1A2, CYP2D6, and CYP3A4.",
"title": "Interactions"
},
{
"paragraph_id": 39,
"text": "Ciprofloxacin is a broad-spectrum antibiotic of the fluoroquinolone class. It is active against some Gram-positive and many Gram-negative bacteria. It functions by inhibiting a type II topoisomerase (DNA gyrase) and topoisomerase IV, necessary to separate bacterial DNA, thereby inhibiting cell division. Bacterial DNA fragmentation will occur as a result of inhibition of the enzymes.",
"title": "Mechanism of action"
},
{
"paragraph_id": 40,
"text": "Ciprofloxacin for systemic administration is available as immediate-release tablets, extended-release tablets, an oral suspension, and as a solution for intravenous administration. When administered over one hour as an intravenous infusion, ciprofloxacin rapidly distributes into the tissues, with levels in some tissues exceeding those in the serum. Penetration into the central nervous system is relatively modest, with cerebrospinal fluid levels normally less than 10% of peak serum concentrations. The serum half-life of ciprofloxacin is about 4–6 hours, with 50–70% of an administered dose being excreted in the urine as unmetabolized drug. An additional 10% is excreted in urine as metabolites. Urinary excretion is virtually complete 24 hours after administration. Dose adjustment is required in the elderly and in those with renal impairment.",
"title": "Pharmacokinetics"
},
{
"paragraph_id": 41,
"text": "Ciprofloxacin is weakly bound to serum proteins (20–40%). It is an inhibitor of the drug-metabolizing enzyme cytochrome P450 1A2, which leads to the potential for clinically important drug interactions with drugs metabolized by that enzyme.",
"title": "Pharmacokinetics"
},
{
"paragraph_id": 42,
"text": "Ciprofloxacin is about 70% orally available when administered orally, so a slightly higher dose is needed to achieve the same exposure when switching from IV to oral administration",
"title": "Pharmacokinetics"
},
{
"paragraph_id": 43,
"text": "The extended release oral tablets allow once-daily administration by releasing the drug more slowly in the gastrointestinal tract. These tablets contain 35% of the administered dose in an immediate-release form and 65% in a slow-release matrix. Maximum serum concentrations are achieved between 1 and 4 hours after administration. Compared to the 250- and 500-mg immediate-release tablets, the 500-mg and 1000-mg XR tablets provide higher Cmax, but the 24‑hour AUCs are equivalent.",
"title": "Pharmacokinetics"
},
{
"paragraph_id": 44,
"text": "Ciprofloxacin immediate-release tablets contain ciprofloxacin as the hydrochloride salt, and the XR tablets contain a mixture of the hydrochloride salt as the free base.",
"title": "Pharmacokinetics"
},
{
"paragraph_id": 45,
"text": "Ciprofloxacin is 1-cyclopropyl-6-fluoro-1,4-dihydro-4-oxo-7-(1-piperazinyl)-3-quinolinecarboxylic acid. Its empirical formula is C17H18FN3O3 and its molecular weight is 331.4 g/mol. It is a faintly yellowish to light yellow crystalline substance.",
"title": "Chemical properties"
},
{
"paragraph_id": 46,
"text": "Ciprofloxacin hydrochloride (USP) is the monohydrochloride monohydrate salt of ciprofloxacin. It is a faintly yellowish to light yellow crystalline substance with a molecular weight of 385.8 g/mol. Its empirical formula is C17H18FN3O3HCl•H2O.",
"title": "Chemical properties"
},
{
"paragraph_id": 47,
"text": "Ciprofloxacin is the most widely used of the second-generation quinolones. In 2010, over 20 million prescriptions were written, making it the 35th-most-commonly prescribed generic drug and the 5th-most-commonly prescribed antibacterial in the U.S.",
"title": "Usage"
},
{
"paragraph_id": 48,
"text": "The first members of the quinolone antibacterial class were relatively low-potency drugs such as nalidixic acid, used mainly in the treatment of urinary tract infections owing to their renal excretion and propensity to be concentrated in urine. In 1979, the publication of a patent filed by the pharmaceutical arm of Kyorin Seiyaku Kabushiki Kaisha disclosed the discovery of norfloxacin, and the demonstration that certain structural modifications including the attachment of a fluorine atom to the quinolone ring leads to dramatically enhanced antibacterial potency. In the aftermath of this disclosure, several other pharmaceutical companies initiated research and development programs with the goal of discovering additional antibacterial agents of the fluoroquinolone class.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "The fluoroquinolone program at Bayer focused on examining the effects of very minor changes to the norfloxacin structure. In 1983, the company published in vitro potency data for ciprofloxacin, a fluoroquinolone antibacterial having a chemical structure differing from that of norfloxacin by the presence of a single carbon atom. This small change led to a two- to 10-fold increase in potency against most strains of Gram-negative bacteria. Importantly, this structural change led to a four-fold improvement in activity against the important Gram-negative pathogen Pseudomonas aeruginosa, making ciprofloxacin one of the most potent known drugs for the treatment of this intrinsically antibiotic-resistant pathogen.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "The oral tablet form of ciprofloxacin was approved in October 1987, just one year after the approval of norfloxacin. In 1991, the intravenous formulation was introduced. Ciprofloxacin sales reached a peak of about 2 billion euros in 2001, before Bayer's patent expired in 2004, after which annual sales have averaged around €200 million.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "The name probably originates from the International Scientific Nomenclature: ci- (alteration of cycl-) + propyl + fluor- + ox- + az- + -mycin.",
"title": "History"
},
{
"paragraph_id": 52,
"text": "It is available as a generic medication and not very expensive. At least one company, Turtle Pharma Private Limited provides industrial-size amounts",
"title": "Society and culture"
},
{
"paragraph_id": 53,
"text": "On 24 October 2001, the Prescription Access Litigation (PAL) project filed suit to dissolve an agreement between Bayer and three of its competitors which produced generic versions of drugs (Barr Laboratories, Rugby Laboratories, and Hoechst-Marion-Roussel) that PAL claimed was blocking access to adequate supplies and cheaper, generic versions of ciprofloxacin. The plaintiffs charged that Bayer Corporation, a unit of Bayer AG, had unlawfully paid the three competing companies a total of $200 million to prevent cheaper, generic versions of ciprofloxacin from being brought to the market, as well as manipulating its price and supply. Numerous other consumer advocacy groups joined the lawsuit. On 15 October 2008, five years after Bayer's patent had expired, the United States District Court for the Eastern District of New York granted Bayer's and the other defendants' motion for summary judgment, holding that any anticompetitive effects caused by the settlement agreements between Bayer and its codefendants were within the exclusionary zone of the patent and thus could not be redressed by federal antitrust law, in effect upholding Bayer's agreement with its competitors.",
"title": "Society and culture"
},
{
"paragraph_id": 54,
"text": "Ciprofloxacin for systemic administration is available as immediate-release tablets, as extended-release tablets, as an oral suspension, and as a solution for intravenous infusion. It is also available for local administration as eye drops and ear drops.",
"title": "Society and culture"
},
{
"paragraph_id": 55,
"text": "A class action was filed against Bayer AG on behalf of employees of the Brentwood Post Office in Washington, D.C., and workers at the U.S. Capitol, along with employees of American Media, Inc. in Florida and postal workers in general who alleged they developed serious adverse effects from taking ciprofloxacin in the aftermath of the anthrax attacks in 2001. The action alleged Bayer failed to warn class members of the potential side effects of the drug, thereby violating the Pennsylvania Unfair Trade Practices and Consumer Protection Laws. The class action was defeated and the litigation abandoned by the plaintiffs. A similar action was filed in 2003 in New Jersey by four New Jersey postal workers but was withdrawn for lack of grounds, as workers had been informed of the risks of ciprofloxacin when they were given the option of taking the drug.",
"title": "Society and culture"
},
{
"paragraph_id": 56,
"text": "As resistance to ciprofloxacin has grown since its introduction, research has been conducted to discover and develop analogs that can be effective against resistant bacteria; some have been looked at in antiviral models as well.",
"title": "Research"
}
] | Ciprofloxacin is a fluoroquinolone antibiotic used to treat a number of bacterial infections. These include bone and joint infections, intra-abdominal infections, certain types of infectious diarrhea, respiratory tract infections, skin infections, typhoid fever, and urinary tract infections, among others. For some infections it is used in addition to other antibiotics. It can be taken by mouth, as eye drops, as ear drops, or intravenously. Common side effects include nausea, vomiting, and diarrhea. Severe side effects include an increased risk of tendon rupture, hallucinations, and nerve damage. In people with myasthenia gravis, it may worsen muscle weakness. Rates of side effects appear to be higher than with some groups of antibiotics, such as cephalosporins, but lower than with others, such as clindamycin. Studies in other animals raise concerns regarding use in pregnancy. No problems were identified, however, in the children of a small number of women who took the medication. It appears to be safe during breastfeeding. It is a second-generation fluoroquinolone with a broad spectrum of activity that usually results in the death of the bacteria. Ciprofloxacin was patented in 1980 and introduced in 1987. It is on the World Health Organization's List of Essential Medicines. The World Health Organization classifies ciprofloxacin as critically important for human medicine. It is available as a generic medication. In 2020, it was the 132nd-most-commonly prescribed medication in the United States, with more than 4 million prescriptions. | 2001-10-12T19:59:00Z | 2023-12-20T21:35:08Z | [
"Template:When",
"Template:Cite dictionary",
"Template:Piperazines",
"Template:Cite press release",
"Template:Cite journal",
"Template:Cite book",
"Template:Short description",
"Template:Nbsp",
"Template:See also",
"Template:Medical citation needed",
"Template:Reflist",
"Template:Cite web",
"Template:Page needed",
"Template:Webarchive",
"Template:QuinoloneAntiBiotics",
"Template:GABA receptor modulators",
"Template:Commons category",
"Template:Otologicals",
"Template:Redirect",
"Template:Distinguish",
"Template:Use dmy dates",
"Template:Drugbox",
"Template:ISBN",
"Template:Drugs.com",
"Template:Portal bar"
] | https://en.wikipedia.org/wiki/Ciprofloxacin |
6,774 | Consubstantiation | Consubstantiation is a Christian theological doctrine that (like transubstantiation) describes the real presence of Christ in the Eucharist. It holds that during the sacrament, the substance of the body and blood of Christ are present alongside the substance of the bread and wine, which remain present. It was part of the doctrines of Lollardy, and considered a heresy by the Roman Catholic Church. It was later championed by Edward Pusey of the Oxford Movement, and is therefore held by many high church Anglicans.
In England in the late 14th century, there was a political and religious movement known as Lollardy. Among much broader goals, the Lollards affirmed a form of consubstantiation—that the Eucharist remained physically bread and wine, while becoming spiritually the body and blood of Christ. Lollardy survived up until the time of the English Reformation.
Although he ultimately rejected it on account of the authority of the Church of Rome, William of Ockham entertains a version of consubstantiation in his Fourth Quodlibet, Question 30, where he claims that "the substance of the bread and the substance of the wine remain there and that the substance of the body of Christ remains in the same place, together with the substance of the bread".
Literary critic Kenneth Burke's dramatism takes this concept and utilizes it in secular rhetorical theory to look at the dialectic of unity and difference within the context of logology.
The doctrine of consubstantiation is often held in contrast to the doctrine of transubstantiation.
To explain the manner of Christ's presence in Holy Communion, many high church Anglicans teach the philosophical explanation of consubstantiation. A major leader in the Anglo-Catholic Oxford Movement, Edward Pusey, championed the view of consubstantiation. Pusey's view is that:
I cannot deem it unfair to apply the name of Consubstantiation to a doctrine which teaches, that "the true flesh and true blood of Christ are in the true bread and wine", in such a way that "whatsoever motion or action the bread" and wine have, the body and blood "of Christ also" have "the same"; and that "the substances in both cases" are "so mingled—that they should constitute some one thing".
The term consubstantiation has been used to describe Martin Luther's Eucharistic doctrine, the sacramental union. Lutheran theologians reject the term because it refers to a philosophical construct that differs from the Lutheran doctrine of the sacramental union, denotes a mixing of substances (bread and wine with body and blood), and suggests a "gross, Capernaitic, carnal" presence of the body and blood of Christ. | [
{
"paragraph_id": 0,
"text": "Consubstantiation is a Christian theological doctrine that (like transubstantiation) describes the real presence of Christ in the Eucharist. It holds that during the sacrament, the substance of the body and blood of Christ are present alongside the substance of the bread and wine, which remain present. It was part of the doctrines of Lollardy, and considered a heresy by the Roman Catholic Church. It was later championed by Edward Pusey of the Oxford Movement, and is therefore held by many high church Anglicans.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In England in the late 14th century, there was a political and religious movement known as Lollardy. Among much broader goals, the Lollards affirmed a form of consubstantiation—that the Eucharist remained physically bread and wine, while becoming spiritually the body and blood of Christ. Lollardy survived up until the time of the English Reformation.",
"title": "Development"
},
{
"paragraph_id": 2,
"text": "Whilst ultimately rejected by him on account of the authority of the Church of Rome, William of Ockham entertains a version of consubstantiation in his Fourth Quodlibet, Question 30, where he claims that \"the substance of the bread and the substance of the wine remain there and that the substance of the body of Christ remains in the same place, together with the substance of the bread\".",
"title": "Development"
},
{
"paragraph_id": 3,
"text": "Literary critic Kenneth Burke's dramatism takes this concept and utilizes it in secular rhetorical theory to look at the dialectic of unity and difference within the context of logology.",
"title": "Development"
},
{
"paragraph_id": 4,
"text": "The doctrine of consubstantiation is often held in contrast to the doctrine of transubstantiation.",
"title": "Development"
},
{
"paragraph_id": 5,
"text": "To explain the manner of Christ's presence in Holy Communion, many high church Anglicans teach the philosophical explanation of consubstantiation. A major leader in the Anglo-Catholic Oxford Movement, Edward Pusey, championed the view of consubstantiation. Pusey's view is that:",
"title": "Development"
},
{
"paragraph_id": 6,
"text": "I cannot deem it unfair to apply the name of Consubstantiation to a doctrine which teaches, that \"the true flesh and true blood of Christ are in the true bread and wine\", in such a way that \"whatsoever motion or action the bread\" and wine have, the body and blood \"of Christ also\" have \"the same\"; and that \"the substances in both cases\" are \"so mingled—that they should constitute some one thing\".",
"title": "Development"
},
{
"paragraph_id": 7,
"text": "The term consubstantiation has been used to describe Martin Luther's Eucharistic doctrine, the sacramental union. Lutheran theologians reject the term because it refers to a philosophical construct that differs from the Lutheran doctrine of the sacramental union, denotes a mixing of substances (bread and wine with body and blood), and suggests a \"gross, Capernaitic, carnal\" presence of the body and blood of Christ.",
"title": "Development"
}
] | Consubstantiation is a Christian theological doctrine that describes the real presence of Christ in the Eucharist. It holds that during the sacrament, the substance of the body and blood of Christ are present alongside the substance of the bread and wine, which remain present. It was part of the doctrines of Lollardy, and considered a heresy by the Roman Catholic Church. It was later championed by Edward Pusey of the Oxford Movement, and is therefore held by many high church Anglicans. | 2001-10-12T21:04:26Z | 2023-12-10T20:48:00Z | [
"Template:Short description",
"Template:Distinguish",
"Template:Quotation",
"Template:Cite book",
"Template:Cite web",
"Template:Eucharist",
"Template:POV statement",
"Template:Reflist",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Consubstantiation |
6,775 | Chlorophyta | Chlorophyta is a taxon of green algae informally called chlorophytes. The name is used in two very different senses, so care is needed to determine the use by a particular author. In older classification systems, it is a highly paraphyletic group of all the green algae within the green plants (Viridiplantae) and thus includes about 7,000 species of mostly aquatic photosynthetic eukaryotic organisms. In newer classifications, it is the sister clade of the streptophytes/charophytes. The clade Streptophyta consists of the Charophyta in which the Embryophyta (land plants) emerged. In this latter sense the Chlorophyta includes only about 4,300 species. About 90% of all known species live in freshwater. Like the land plants (embryophytes: bryophytes and tracheophytes), green algae (chlorophytes and charophytes other than embryophytes) contain chlorophyll a and chlorophyll b and store food as starch in their plastids.
With the exception of the three classes Ulvophyceae, Trebouxiophyceae and Chlorophyceae in the UTC clade, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. Some conduct sexual reproduction, which is oogamous or isogamous. All members of the clade have motile flagellated swimming cells. While most species live in freshwater habitats and a large number in marine habitats, other species are adapted to a wide range of land environments. For example, Chlamydomonas nivalis, which causes watermelon snow, lives on summer alpine snowfields. Others, such as Trentepohlia species, live attached to rocks or woody parts of trees. Monostroma kuroshiense, an edible green alga cultivated worldwide and the most expensive among green algae, belongs to this group.
Species of Chlorophyta (treated as what is now considered one of the two main clades of Viridiplantae) are common inhabitants of marine, freshwater and terrestrial environments. Several species have adapted to specialised and extreme environments, such as deserts, arctic environments, hypersaline habitats, marine deep waters, deep-sea hydrothermal vents and habitats that experience extreme changes in temperature, light and salinity. Some groups, such as the Trentepohliales, are exclusively found on land. Several species of Chlorophyta live in symbiosis with a diverse range of eukaryotes, including fungi (to form lichens), ciliates, forams, cnidarians and molluscs. Some species of Chlorophyta are heterotrophic, either free-living or parasitic. Others are mixotrophic bacterivores through phagocytosis. Two common species of the heterotrophic green alga Prototheca are pathogenic and can cause the disease protothecosis in humans and animals.
Characteristics used for the classification of Chlorophyta are: type of zoid, mitosis (karyokinesis), cytokinesis, organization level, life cycle, type of gametes, cell wall polysaccharides and more recently genetic data.
Leliaert et al. 2012 proposed the following phylogeny. They marked the "prasinophytes" as paraphyletic, with the remaining Chlorophyta groups as "core chlorophytes". They described all Streptophyta except the land plants as paraphyletic "charophytes".
A 2020 paper places the "Prasinodermophyta" (i.e. Prasinodermophyceae + Palmophyllophyceae) as the basal Viridiplantae clade.
Simplified phylogeny of the Chlorophyta, according to Leliaert et al. 2012. Note that many algae previously classified in Chlorophyta are placed here in Streptophyta.
A possible classification when Chlorophyta refers to one of the two clades of the Viridiplantae is shown below.
Classification of the Chlorophyta, treated as all green algae, according to Hoek, Mann and Jahns 1995.
In a note added in proof, an alternative classification is presented for the algae of the class Chlorophyceae:
Classification of the Chlorophyta and Charophyta according to Bold and Wynne 1985.
Classification of the Chlorophyta according to Mattox & Stewart 1984:
Classification of the Chlorophyta according to Fott 1971.
Classification of the Chlorophyta and related algae according to Round 1971.
Classification of the Chlorophyta according to Smith 1938:
In February 2020, the fossilized remains of green algae, named Proterocladus antiquus were discovered in the northern province of Liaoning, China. At around a billion years old, it is believed to be one of the oldest examples of a multicellular chlorophyte. | [
{
"paragraph_id": 0,
"text": "Chlorophyta is a taxon of green algae informally called chlorophytes. The name is used in two very different senses, so care is needed to determine the use by a particular author. In older classification systems, it is a highly paraphyletic group of all the green algae within the green plants (Viridiplantae) and thus includes about 7,000 species of mostly aquatic photosynthetic eukaryotic organisms. In newer classifications, it is the sister clade of the streptophytes/charophytes. The clade Streptophyta consists of the Charophyta in which the Embryophyta (land plants) emerged. In this latter sense the Chlorophyta includes only about 4,300 species. About 90% of all known species live in freshwater. Like the land plants (embryophytes: bryophytes and tracheophytes), green algae (chlorophytes and charophytes besides embryophytes) contain chlorophyll a and chlorophyll b and store food as starch in their plastids.",
"title": ""
},
{
"paragraph_id": 1,
"text": "With the exception of the three classes Ulvophyceae, Trebouxiophyceae and Chlorophyceae in the UTC clade, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. Some conduct sexual reproduction, which is oogamous or isogamous. All members of the clade have motile flagellated swimming cells. While most species live in freshwater habitats and a large number in marine habitats, other species are adapted to a wide range of land environments. For example, Chlamydomonas nivalis, which causes Watermelon snow, lives on summer alpine snowfields. Others, such as Trentepohlia species, live attached to rocks or woody parts of trees. Monostroma kuroshiense, an edible green alga cultivated worldwide and most expensive among green algae, belongs to this group.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Species of Chlorophyta (treated as what is now considered one of the two main clades of Viridiplantae) are common inhabitants of marine, freshwater and terrestrial environments. Several species have adapted to specialised and extreme environments, such as deserts, arctic environments, hypersaline habitats, marine deep waters, deep-sea hydrothermal vents and habitats that experiences extreme changes in temperature, light and salinity. Some groups, such as the Trentepohliales are exclusively found on land. Several species of Chlorophyta live in symbiosis with a diverse range of eukaryotes, including fungi (to form lichens), ciliates, forams, cnidarians and molluscs. Some species of Chlorophyta are heterotrophic, either free-living or parasitic. Others are mixotrophic bacterivores through phagocytosis. Two common species of the heterotrophic green alga Prototheca are pathogenic and can cause the disease protothecosis in humans and animals.",
"title": "Ecology"
},
{
"paragraph_id": 3,
"text": "Characteristics used for the classification of Chlorophyta are: type of zoid, mitosis (karyokinesis), cytokinesis, organization level, life cycle, type of gametes, cell wall polysaccharides and more recently genetic data.",
"title": "Classifications"
},
{
"paragraph_id": 4,
"text": "Leliaert et al. 2012 proposed the following phylogeny. He marked the \"prasinophytes\" as paraphyletic, with the remaining Chlorophyta groups as \"core chlorophytes\". He described all Streptophyta except the land plants as paraphyletic \"charophytes\".",
"title": "Classifications"
},
{
"paragraph_id": 5,
"text": "A 2020 paper places the \"Prasinodermophyta\" (i.e. Prasinodermophyceae + Palmophyllophyceae) as the basal Viridiplantae clade.",
"title": "Classifications"
},
{
"paragraph_id": 6,
"text": "Simplified phylogeny of the Chlorophyta, according to Leliaert et al. 2012. Note that many algae previously classified in Chlorophyta are placed here in Streptophyta.",
"title": "Classifications"
},
{
"paragraph_id": 7,
"text": "A possible classification when Chlorophyta refers to one of the two clades of the Viridiplantae is shown below.",
"title": "Classifications"
},
{
"paragraph_id": 8,
"text": "Classification of the Chlorophyta, treated as all green algae, according to Hoek, Mann and Jahns 1995.",
"title": "Classifications"
},
{
"paragraph_id": 9,
"text": "In a note added in proof, an alternative classification is presented for the algae of the class Chlorophyceae:",
"title": "Classifications"
},
{
"paragraph_id": 10,
"text": "Classification of the Chlorophyta and Charophyta according to Bold and Wynne 1985.",
"title": "Classifications"
},
{
"paragraph_id": 11,
"text": "Classification of the Chlorophyta according to Mattox & Stewart 1984:",
"title": "Classifications"
},
{
"paragraph_id": 12,
"text": "Classification of the Chlorophyta according to Fott 1971.",
"title": "Classifications"
},
{
"paragraph_id": 13,
"text": "Classification of the Chlorophyta and related algae according to Round 1971.",
"title": "Classifications"
},
{
"paragraph_id": 14,
"text": "Classification of the Chlorophyta according to Smith 1938:",
"title": "Classifications"
},
{
"paragraph_id": 15,
"text": "In February 2020, the fossilized remains of green algae, named Proterocladus antiquus were discovered in the northern province of Liaoning, China. At around a billion years old, it is believed to be one of the oldest examples of a multicellular chlorophyte.",
"title": "Research and discoveries"
}
] | Chlorophyta is a taxon of green algae informally called chlorophytes. The name is used in two very different senses, so care is needed to determine the use by a particular author. In older classification systems, it is a highly paraphyletic group of all the green algae within the green plants (Viridiplantae) and thus includes about 7,000 species of mostly aquatic photosynthetic eukaryotic organisms. In newer classifications, it is the sister clade of the streptophytes/charophytes. The clade Streptophyta consists of the Charophyta in which the Embryophyta emerged. In this latter sense the Chlorophyta includes only about 4,300 species. About 90% of all known species live in freshwater.
Like the land plants, green algae contain chlorophyll a and chlorophyll b and store food as starch in their plastids. With the exception of the three classes Ulvophyceae, Trebouxiophyceae and Chlorophyceae in the UTC clade, which show various degrees of multicellularity, all the Chlorophyta lineages are unicellular. Some members of the group form symbiotic relationships with protozoa, sponges, and cnidarians. Others form symbiotic relationships with fungi to form lichens, but the majority of species are free-living. Some conduct sexual reproduction, which is oogamous or isogamous. All members of the clade have motile flagellated swimming cells. While most species live in freshwater habitats and a large number in marine habitats, other species are adapted to a wide range of land environments. For example, Chlamydomonas nivalis, which causes Watermelon snow, lives on summer alpine snowfields. Others, such as Trentepohlia species, live attached to rocks or woody parts of trees. Monostroma kuroshiense, an edible green alga cultivated worldwide and most expensive among green algae, belongs to this group. | 2002-02-25T15:43:11Z | 2023-11-29T16:11:51Z | [
"Template:Automatic taxobox",
"Template:Commons category",
"Template:Plant classification",
"Template:Further",
"Template:Clade",
"Template:Rp",
"Template:Cite journal",
"Template:Cite web",
"Template:Wikispecies",
"Template:Refend",
"Template:About",
"Template:Reflist",
"Template:Refbegin",
"Template:Short description",
"Template:Cite book",
"Template:Life on Earth",
"Template:Taxonbar"
] | https://en.wikipedia.org/wiki/Chlorophyta |
6,776 | Capybara | The capybara or greater capybara (Hydrochoerus hydrochaeris) is a giant cavy rodent native to South America. It is the largest living rodent and a member of the genus Hydrochoerus. The only other extant member is the lesser capybara (Hydrochoerus isthmius). Its close relatives include guinea pigs and rock cavies, and it is more distantly related to the agouti, the chinchilla, and the nutria. The capybara inhabits savannas and dense forests, and lives near bodies of water. It is a highly social species and can be found in groups as large as 100 individuals, but usually lives in groups of 10–20 individuals. The capybara is hunted for its meat and hide and also for grease from its thick fatty skin. It is not considered a threatened species.
Its common name is derived from Tupi ka'apiûara, a complex agglutination of kaá (leaf) + píi (slender) + ú (eat) + ara (a suffix for agent nouns), meaning "one who eats slender leaves", or "grass-eater". The scientific name, both hydrochoerus and hydrochaeris, comes from Greek ὕδωρ (hydor "water") and χοῖρος (choiros "pig, hog").
The capybara and the lesser capybara both belong to the subfamily Hydrochoerinae along with the rock cavies. The living capybaras and their extinct relatives were previously classified in their own family Hydrochoeridae. Since 2002, molecular phylogenetic studies have recognized a close relationship between Hydrochoerus and Kerodon, the rock cavies, supporting placement of both genera in a subfamily of Caviidae.
Paleontological classifications previously used Hydrochoeridae for all capybaras, while using Hydrochoerinae for the living genus and its closest fossil relatives, such as Neochoerus, but more recently have adopted the classification of Hydrochoerinae within Caviidae. The taxonomy of fossil hydrochoerines is also in a state of flux. In recent years, the diversity of fossil hydrochoerines has been substantially reduced. This is largely due to the recognition that capybara molar teeth show strong variation in shape over the life of an individual. In one instance, material once referred to four genera and seven species on the basis of differences in molar shape is now thought to represent differently aged individuals of a single species, Cardiatherium paranense. Among fossil species, the name "capybara" can refer to the many species of Hydrochoerinae that are more closely related to the modern Hydrochoerus than to the "cardiomyine" rodents like Cardiomys. The fossil genera Cardiatherium, Phugatherium, Hydrochoeropsis, and Neochoerus are all capybaras under that concept.
The capybara has a heavy, barrel-shaped body and short head, with reddish-brown fur on the upper part of its body that turns yellowish-brown underneath. Its sweat glands can be found in the surface of the hairy portions of its skin, an unusual trait among rodents. The animal lacks down hair, and its guard hair differs little from over hair.
Adult capybaras grow to 106 to 134 cm (3.48 to 4.40 ft) in length, stand 50 to 62 cm (20 to 24 in) tall at the withers, and typically weigh 35 to 66 kg (77 to 146 lb), with an average in the Venezuelan llanos of 48.9 kg (108 lb). Females are slightly heavier than males. The top recorded weights are 91 kg (201 lb) for a wild female from Brazil and 73.5 kg (162 lb) for a wild male from Uruguay. Also, an 81 kg individual was reported in São Paulo in 2001 or 2002. The dental formula is 1.0.1.3 / 1.0.1.3 (upper/lower). Capybaras have slightly webbed feet and vestigial tails. Their hind legs are slightly longer than their forelegs; they have three toes on their rear feet and four toes on their front feet. Their muzzles are blunt, and their nostrils, eyes, and ears are near the top of their heads.
Its karyotype has 2n = 66 and FN = 102, meaning it has 66 chromosomes with a total of 102 arms.
Capybaras are semiaquatic mammals found throughout all countries of South America except Chile. They live in densely forested areas near bodies of water, such as lakes, rivers, swamps, ponds, and marshes, as well as flooded savannah and along rivers in the tropical rainforest. They are superb swimmers and can hold their breath underwater for up to five minutes at a time. Capybaras have flourished on cattle ranches. They roam in home ranges averaging 10 hectares (25 acres) in high-density populations.
Many escapees from captivity can also be found in similar watery habitats around the world. Sightings are fairly common in Florida, although a breeding population has not yet been confirmed. In 2011, one specimen was spotted on the Central Coast of California. These escaped populations occur in areas that prehistoric capybaras inhabited; late Pleistocene capybaras lived in Florida, Hydrochoerus hesperotiganites in California, and Hydrochoerus gaylordi in Grenada, and feral capybaras in North America may actually fill the ecological niche of the Pleistocene species.
Capybaras are herbivores, grazing mainly on grasses and aquatic plants, as well as fruit and tree bark. They are very selective feeders, often feeding on the leaves of one plant species while disregarding the species around it. They eat a greater variety of plants during the dry season, as fewer plants are available. While they eat grass during the wet season, they have to switch to more abundant reeds during the dry season. Plants that capybaras eat during the summer lose their nutritional value in the winter, so they are not consumed at that time. The capybara's jaw hinge is not perpendicular, so they chew food by grinding back-and-forth rather than side-to-side. Capybaras are autocoprophagous, meaning they eat their own feces as a source of bacterial gut flora, to help digest the cellulose in the grass that forms their normal diet, and to extract the maximum protein and vitamins from their food. They also regurgitate food to masticate again, similar to cud-chewing by cattle. As is the case with other rodents, the front teeth of capybaras grow continually to compensate for the constant wear from eating grasses; their cheek teeth also grow continuously.
Like its relative the guinea pig, the capybara does not have the capacity to synthesize vitamin C, and capybaras not supplemented with vitamin C in captivity have been reported to develop gum disease as a sign of scurvy.
They can have a lifespan of 8–10 years, but tend to live less than four years in the wild due to predation from big cats such as jaguars and pumas, and from non-mammalian predators such as eagles and caimans. The capybara is also the preferred prey of the green anaconda.
Capybaras are known to be gregarious. While they sometimes live solitarily, they are more commonly found in groups of around 10–20 individuals, with two to four adult males, four to seven adult females, and the remainder juveniles. Capybara groups can consist of as many as 50 or 100 individuals during the dry season when the animals gather around available water sources. Males establish social bonds, dominance, or general group consensus. They can make dog-like barks when threatened or when females are herding young.
Capybaras have two types of scent glands: a morrillo, located on the snout, and anal glands. Both sexes have these glands, but males have much larger morrillos and use their anal glands more frequently. The anal glands of males are also lined with detachable hairs. A crystalline form of scent secretion is coated on these hairs and is released when in contact with objects such as plants. These hairs have a longer-lasting scent mark and are tasted by other capybaras. Capybaras scent-mark by rubbing their morrillos on objects, or by walking over scrub and marking it with their anal glands. Capybaras can spread their scent further by urinating; however, females usually mark without urinating and scent-mark less frequently than males overall. Females mark more often during the wet season when they are in estrus. In addition to objects, males also scent-mark females.
When in estrus, the female's scent changes subtly and nearby males begin pursuit. In addition, a female alerts males she is in estrus by whistling through her nose. During mating, the female has the advantage and mating choice. Capybaras mate only in water, and if a female does not want to mate with a certain male, she either submerges or leaves the water. Dominant males are highly protective of the females, but they usually cannot prevent some of the subordinates from copulating. The larger the group, the harder it is for the male to watch all the females. Dominant males secure significantly more matings than each subordinate, but subordinate males, as a class, are responsible for more matings than each dominant male. The lifespan of the capybara's sperm is longer than that of other rodents.
Capybara gestation is 130–150 days, and produces a litter of four young on average, but may produce between one and eight in a single litter. Birth is on land and the female rejoins the group within a few hours of delivering the newborn capybaras, which join the group as soon as they are mobile. Within a week, the young can eat grass, but continue to suckle—from any female in the group—until weaned around 16 weeks. The young form a group within the main group. Alloparenting has been observed in this species. Breeding peaks between April and May in Venezuela and between October and November in Mato Grosso, Brazil.
Though quite agile on land, capybaras are equally at home in the water. They are excellent swimmers, and can remain completely submerged for up to five minutes, an ability they use to evade predators. Capybaras can sleep in water, keeping only their noses out. As temperatures increase during the day, they wallow in water and then graze during the late afternoon and early evening. They also spend time wallowing in mud. They rest around midnight and then continue to graze before dawn.
Capybaras are not considered a threatened species; their population is stable throughout most of their South American range, though in some areas hunting has reduced their numbers. Capybaras are hunted for their meat and pelts in some areas, and otherwise killed by humans who see their grazing as competition for livestock. In some areas, they are farmed, which has the effect of ensuring the wetland habitats are protected. Their survival is aided by their ability to breed rapidly.
Capybaras have adapted well to urbanization in South America. They can be found in many areas in zoos and parks, and may live for 12 years in captivity, more than double their wild lifespan. Capybaras are docile and usually allow humans to pet and hand-feed them, but physical contact is normally discouraged, as their ticks can be vectors to Rocky Mountain spotted fever. The European Association of Zoos and Aquaria asked Drusillas Park in Alfriston, Sussex, England, to keep the studbook for capybaras, to monitor captive populations in Europe. The studbook includes information about all births, deaths and movements of capybaras, as well as how they are related.
Capybaras are farmed for meat and skins in South America. The meat is considered unsuitable to eat in some areas, while in other areas it is considered an important source of protein. In parts of South America, especially in Venezuela, capybara meat is popular during Lent and Holy Week as the Catholic Church previously issued special dispensation to allow it to be eaten while other meats are generally forbidden. After several attempts, a 1784 papal bull was obtained that allowed the consumption of capybara during Lent. There is a widespread perception in Venezuela that consumption of capybaras is exclusive to rural people.
Although it is illegal in some states, capybaras are occasionally kept as pets in the United States. The image of a capybara features on the 2-peso coin of Uruguay. In Japan, following the lead of Izu Shaboten Zoo in 1982, multiple establishments or zoos that raise capybaras have adopted the practice of having them relax in onsen during the winter. They are seen as an attraction by Japanese people. Capybaras became popular in Japan largely due to the cartoon character Kapibara-san.
In August 2021, Argentine and international media reported that capybaras had been causing serious problems for residents of Nordelta, an affluent gated community north of Buenos Aires built atop wetland habitat. This inspired social media users to jokingly adopt the capybara as a symbol of class struggle and communism. Brazilian Lyme-like borreliosis likely involves capybaras as reservoirs and Amblyomma and Rhipicephalus ticks as vectors.
In the early 2020s, capybaras became an increasingly prominent figure in meme culture due to many factors, including the disturbances in Nordelta, which led to them being comically cast as figures of class struggle. A common meme format also places capybaras in various situations accompanied by the song "After Party" by Don Toliver, leading to a tremendous growth in popularity. Due to a lyric in Toliver's song, capybaras are also associated with the phrase "Ok I pull up".
{
"paragraph_id": 0,
"text": "The capybara or greater capybara (Hydrochoerus hydrochaeris) is a giant cavy rodent native to South America. It is the largest living rodent and a member of the genus Hydrochoerus. The only other extant member is the lesser capybara (Hydrochoerus isthmius). Its close relatives include guinea pigs and rock cavies, and it is more distantly related to the agouti, the chinchilla, and the nutria. The capybara inhabits savannas and dense forests, and lives near bodies of water. It is a highly social species and can be found in groups as large as 100 individuals, but usually live in groups of 10–20 individuals. The capybara is hunted for its meat and hide and also for grease from its thick fatty skin. It is not considered a threatened species.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Its common name is derived from Tupi ka'apiûaracode: tpw is deprecated , a complex agglutination of kaácode: tpw is deprecated (leaf) + píicode: tpw is deprecated (slender) + úcode: tpw is deprecated (eat) + aracode: tpw is deprecated (a suffix for agent nouns), meaning \"one who eats slender leaves\", or \"grass-eater\". The scientific name, both hydrochoerus and hydrochaeris, comes from Greek ὕδωρ (hydor \"water\") and χοῖρος (choiros \"pig, hog\").",
"title": "Etymology"
},
{
"paragraph_id": 2,
"text": "The capybara and the lesser capybara both belong to the subfamily Hydrochoerinae along with the rock cavies. The living capybaras and their extinct relatives were previously classified in their own family Hydrochoeridae. Since 2002, molecular phylogenetic studies have recognized a close relationship between Hydrochoerus and Kerodon, the rock cavies, supporting placement of both genera in a subfamily of Caviidae.",
"title": "Classification and phylogeny"
},
{
"paragraph_id": 3,
"text": "Paleontological classifications previously used Hydrochoeridae for all capybaras, while using Hydrochoerinae for the living genus and its closest fossil relatives, such as Neochoerus, but more recently have adopted the classification of Hydrochoerinae within Caviidae. The taxonomy of fossil hydrochoerines is also in a state of flux. In recent years, the diversity of fossil hydrochoerines has been substantially reduced. This is largely due to the recognition that capybara molar teeth show strong variation in shape over the life of an individual. In one instance, material once referred to four genera and seven species on the basis of differences in molar shape is now thought to represent differently aged individuals of a single species, Cardiatherium paranense. Among fossil species, the name \"capybara\" can refer to the many species of Hydrochoerinae that are more closely related to the modern Hydrochoerus than to the \"cardiomyine\" rodents like Cardiomys. The fossil genera Cardiatherium, Phugatherium, Hydrochoeropsis, and Neochoerus are all capybaras under that concept.",
"title": "Classification and phylogeny"
},
{
"paragraph_id": 4,
"text": "The capybara has a heavy, barrel-shaped body and short head, with reddish-brown fur on the upper part of its body that turns yellowish-brown underneath. Its sweat glands can be found in the surface of the hairy portions of its skin, an unusual trait among rodents. The animal lacks down hair, and its guard hair differs little from over hair.",
"title": "Description"
},
{
"paragraph_id": 5,
"text": "Adult capybaras grow to 106 to 134 cm (3.48 to 4.40 ft) in length, stand 50 to 62 cm (20 to 24 in) tall at the withers, and typically weigh 35 to 66 kg (77 to 146 lb), with an average in the Venezuelan llanos of 48.9 kg (108 lb). Females are slightly heavier than males. The top recorded weights are 91 kg (201 lb) for a wild female from Brazil and 73.5 kg (162 lb) for a wild male from Uruguay. Also, an 81 kg individual was reported in São Paulo in 2001 or 2002. The dental formula is 1.0.1.31.0.1.3. Capybaras have slightly webbed feet and vestigial tails. Their hind legs are slightly longer than their forelegs; they have three toes on their rear feet and four toes on their front feet. Their muzzles are blunt, with nostrils, and the eyes and ears are near the top of their heads.",
"title": "Description"
},
{
"paragraph_id": 6,
"text": "Its karyotype has 2n = 66 and FN = 102, meaning it has 66 chromosomes with a total of 102 arms.",
"title": "Description"
},
{
"paragraph_id": 7,
"text": "Capybaras are semiaquatic mammals found throughout all countries of South America except Chile. They live in densely forested areas near bodies of water, such as lakes, rivers, swamps, ponds, and marshes, as well as flooded savannah and along rivers in the tropical rainforest. They are superb swimmers and can hold their breath underwater for up to five minutes at a time. Capybara have flourished in cattle ranches. They roam in home ranges averaging 10 hectares (25 acres) in high-density populations.",
"title": "Ecology"
},
{
"paragraph_id": 8,
"text": "Many escapees from captivity can also be found in similar watery habitats around the world. Sightings are fairly common in Florida, although a breeding population has not yet been confirmed. In 2011, one specimen was spotted on the Central Coast of California. These escaped populations occur in areas where prehistoric capybaras inhabited; late Pleistocene capybaras inhabited Florida and Hydrochoerus hesperotiganites in California and Hydrochoerus gaylordi in Grenada, and feral capybaras in North America may actually fill the ecological niche of the Pleistocene species.",
"title": "Ecology"
},
{
"paragraph_id": 9,
"text": "Capybaras are herbivores, grazing mainly on grasses and aquatic plants, as well as fruit and tree bark. They are very selective feeders and feed on the leaves of one species and disregard other species surrounding it. They eat a greater variety of plants during the dry season, as fewer plants are available. While they eat grass during the wet season, they have to switch to more abundant reeds during the dry season. Plants that capybaras eat during the summer lose their nutritional value in the winter, so they are not consumed at that time. The capybara's jaw hinge is not perpendicular, so they chew food by grinding back-and-forth rather than side-to-side. Capybaras are autocoprophagous, meaning they eat their own feces as a source of bacterial gut flora, to help digest the cellulose in the grass that forms their normal diet, and to extract the maximum protein and vitamins from their food. They also regurgitate food to masticate again, similar to cud-chewing by cattle. As is the case with other rodents, the front teeth of capybaras grow continually to compensate for the constant wear from eating grasses; their cheek teeth also grow continuously.",
"title": "Ecology"
},
{
"paragraph_id": 10,
"text": "Like its relative the guinea pig, the capybara does not have the capacity to synthesize vitamin C, and capybaras not supplemented with vitamin C in captivity have been reported to develop gum disease as a sign of scurvy.",
"title": "Ecology"
},
{
"paragraph_id": 11,
"text": "They can have a lifespan of 8–10 years, but tend to live less than four years in the wild due to predation from big cats like the jaguars and pumas and non-mammalian predators like eagles and the caimans. The capybara is also the preferred prey of the green anaconda.",
"title": "Ecology"
},
{
"paragraph_id": 12,
"text": "Capybaras are known to be gregarious. While they sometimes live solitarily, they are more commonly found in groups of around 10–20 individuals, with two to four adult males, four to seven adult females, and the remainder juveniles. Capybara groups can consist of as many as 50 or 100 individuals during the dry season when the animals gather around available water sources. Males establish social bonds, dominance, or general group consensus. They can make dog-like barks when threatened or when females are herding young.",
"title": "Social organization"
},
{
"paragraph_id": 13,
"text": "Capybaras have two types of scent glands: a morrillo, located on the snout, and anal glands. Both sexes have these glands, but males have much larger morrillos and use their anal glands more frequently. The anal glands of males are also lined with detachable hairs. A crystalline form of scent secretion is coated on these hairs and is released when in contact with objects such as plants. These hairs have a longer-lasting scent mark and are tasted by other capybaras. Capybaras scent-mark by rubbing their morrillos on objects, or by walking over scrub and marking it with their anal glands. Capybaras can spread their scent further by urinating; however, females usually mark without urinating and scent-mark less frequently than males overall. Females mark more often during the wet season when they are in estrus. In addition to objects, males also scent-mark females.",
"title": "Social organization"
},
{
"paragraph_id": 14,
"text": "When in estrus, the female's scent changes subtly and nearby males begin pursuit. In addition, a female alerts males she is in estrus by whistling through her nose. During mating, the female has the advantage and mating choice. Capybaras mate only in water, and if a female does not want to mate with a certain male, she either submerges or leaves the water. Dominant males are highly protective of the females, but they usually cannot prevent some of the subordinates from copulating. The larger the group, the harder it is for the male to watch all the females. Dominant males secure significantly more matings than each subordinate, but subordinate males, as a class, are responsible for more matings than each dominant male. The lifespan of the capybara's sperm is longer than that of other rodents.",
"title": "Social organization"
},
{
"paragraph_id": 15,
"text": "Capybara gestation is 130–150 days, and produces a litter of four young on average, but may produce between one and eight in a single litter. Birth is on land and the female rejoins the group within a few hours of delivering the newborn capybaras, which join the group as soon as they are mobile. Within a week, the young can eat grass, but continue to suckle—from any female in the group—until weaned around 16 weeks. The young form a group within the main group. Alloparenting has been observed in this species. Breeding peaks between April and May in Venezuela and between October and November in Mato Grosso, Brazil.",
"title": "Social organization"
},
{
"paragraph_id": 16,
"text": "Though quite agile on land, capybaras are equally at home in the water. They are excellent swimmers, and can remain completely submerged for up to five minutes, an ability they use to evade predators. Capybaras can sleep in water, keeping only their noses out. As temperatures increase during the day, they wallow in water and then graze during the late afternoon and early evening. They also spend time wallowing in mud. They rest around midnight and then continue to graze before dawn.",
"title": "Social organization"
},
{
"paragraph_id": 17,
"text": "Capybaras are not considered a threatened species; their population is stable throughout most of their South American range, though in some areas hunting has reduced their numbers. Capybaras are hunted for their meat and pelts in some areas, and otherwise killed by humans who see their grazing as competition for livestock. In some areas, they are farmed, which has the effect of ensuring the wetland habitats are protected. Their survival is aided by their ability to breed rapidly.",
"title": "Conservation and human interaction"
},
{
"paragraph_id": 18,
"text": "Capybaras have adapted well to urbanization in South America. They can be found in many areas in zoos and parks, and may live for 12 years in captivity, more than double their wild lifespan. Capybaras are docile and usually allow humans to pet and hand-feed them, but physical contact is normally discouraged, as their ticks can be vectors to Rocky Mountain spotted fever. The European Association of Zoos and Aquaria asked Drusillas Park in Alfriston, Sussex, England, to keep the studbook for capybaras, to monitor captive populations in Europe. The studbook includes information about all births, deaths and movements of capybaras, as well as how they are related.",
"title": "Conservation and human interaction"
},
{
"paragraph_id": 19,
"text": "Capybaras are farmed for meat and skins in South America. The meat is considered unsuitable to eat in some areas, while in other areas it is considered an important source of protein. In parts of South America, especially in Venezuela, capybara meat is popular during Lent and Holy Week as the Catholic Church previously issued special dispensation to allow it to be eaten while other meats are generally forbidden. After several attempts a 1784 Papal bull was obtained that allowed the consumption of capybara during Lent. There is widespread perception in Venezuela that consumption of capybaras is exclusive to rural people.",
"title": "Conservation and human interaction"
},
{
"paragraph_id": 20,
"text": "Although it is illegal in some states, capybaras are occasionally kept as pets in the United States. The image of a capybara features on the 2-peso coin of Uruguay. In Japan, following the lead of Izu Shaboten Zoo in 1982, multiple establishments or zoos in Japan that raise capybaras have adopted the practice of having them relax in onsen during the winter. They are seen as an attraction by Japanese people. Capybaras became big in Japan due to the popular cartoon character Kapibara-san.",
"title": "Conservation and human interaction"
},
{
"paragraph_id": 21,
"text": "In August 2021, Argentine and international media reported that capybaras had been causing serious problems for residents of Nordelta, an affluent gated community north of Buenos Aires built atop wetland habitat. This inspired social media users to jokingly adopt the capybara as a symbol of class struggle and communism. Brazilian Lyme-like borreliosis likely involves capybaras as reservoirs and Amblyomma and Rhipicephalus ticks as vectors.",
"title": "Conservation and human interaction"
},
{
"paragraph_id": 22,
"text": "In the early 2020s, capybaras became a growing figure of meme culture due to many factors, including the disturbances in Nordelta which led to them being comically postulated as figures of class struggle. Also, a common meme format includes capybaras in various situations with the song \"After Party\" by Don Toliver, leading to a tremendous growth in popularity. Due to a lyric in Toliver's song, capybaras are also associated with the phrase \"Ok I pull up\".",
"title": "Popularity and meme culture"
}
] | The capybara or greater capybara is a giant cavy rodent native to South America. It is the largest living rodent and a member of the genus Hydrochoerus. The only other extant member is the lesser capybara. Its close relatives include guinea pigs and rock cavies, and it is more distantly related to the agouti, the chinchilla, and the nutria. The capybara inhabits savannas and dense forests, and lives near bodies of water. It is a highly social species and can be found in groups as large as 100 individuals, but usually live in groups of 10–20 individuals. The capybara is hunted for its meat and hide and also for grease from its thick fatty skin. It is not considered a threatened species. | 2001-10-13T20:04:37Z | 2023-12-29T18:46:05Z | [
"Template:Cite book",
"Template:Cite journal",
"Template:Wikispecies",
"Template:Wiktionary",
"Template:Caviidae nav",
"Template:Pp",
"Template:Reflist",
"Template:Webarchive",
"Template:Taxonbar",
"Template:Convert",
"Template:Cite web",
"Template:Commons category",
"Template:Use dmy dates",
"Template:Speciesbox",
"Template:Lang",
"Template:Portal",
"Template:Notelist",
"Template:Cite news",
"Template:Short description",
"Template:Other uses",
"Template:Efn",
"Template:EB1911 poster",
"Template:Good article",
"Template:DentalFormula",
"Template:MSW3 Woods"
] | https://en.wikipedia.org/wiki/Capybara |
6,777 | Computer animation | Computer animation is the process used for digitally generating animations. The more general term computer-generated imagery (CGI) encompasses both static scenes (still images) and dynamic images (moving images), while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics. The animation's target is sometimes the computer itself, while other times it is film.
Computer animation is essentially a digital successor to stop motion techniques using 3D models, and to traditional animation techniques using frame-by-frame animation of 2D illustrations. Computer-generated animation also allows a single graphic artist to produce such content without using actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new similar image advanced slightly in time (usually at a rate of 24, 25, or 30 frames per second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.
For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.
For 3D animations, all frames must be rendered after the modeling is complete. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, like digital video. The frames may also be rendered in real-time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet (e.g. Adobe Flash, X3D) often use the software on the end user's computer to render in real-time as an alternative to streaming or pre-loaded high bandwidth animations.
To trick the eye and the brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster. (A frame is one complete image.) With rates above 75 to 120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates.
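As a quick illustrative calculation (not from the source), these rates translate directly into the number of individual images that must be produced; a sketch in Python:

# Sketch: how many individual images a given duration requires at common rates.
def frames_needed(seconds: float, fps: float) -> int:
    return round(seconds * fps)

for fps in (12, 15, 24, 25, 30):
    # A 90-minute film at 24 frames per second needs 129,600 frames.
    print(fps, "fps:", frames_needed(90 * 60, fps), "frames")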
Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the illusion of continuous movement. For high resolution, adapters are used.
Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll. Other digital animation was also practiced at the Lawrence Livermore National Laboratory.
In 1967, a computer animation named "Hummingbird" was created by Charles Csuri and James Shaffer. In 1968, a computer animation called "Kitty" was created with BESM-4 by Nikolai Konstantinov, depicting a cat moving around. In 1971, a computer animation called "Metadata" was created, showing various shapes.
An early milestone in the history of computer animation came with the sequel to the 1973 film Westworld, a science-fiction film about a society in which robots live and work among humans. The sequel, Futureworld (1976), used 3D wire-frame imagery, featuring a computer-animated hand and face both created by University of Utah graduates Edwin Catmull and Fred Parke. This imagery originally appeared in their student film A Computer Animated Hand, which they completed in 1972.
Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques that is attended by thousands of computer professionals each year. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima.
CGI short films have been produced as independent animation since 1976. Early examples of feature films incorporating CGI animation include the live-action films Star Trek II: The Wrath of Khan and Tron (both 1982), and the Japanese anime film Golgo 13: The Professional (1983). VeggieTales, made in 1993, was the first American fully 3D computer-animated series sold directly to consumers; its success inspired other animation series, such as ReBoot (1994) and Transformers: Beast Wars (1996), to adopt a fully computer-generated style.
The first full length computer-animated television series was ReBoot, which debuted in September 1994; the series followed the adventures of characters who lived inside a computer. The first feature-length computer-animated film is Toy Story (1995), which was made by Disney and Pixar: following an adventure centered around anthropomorphic toys and their owners, this groundbreaking film was also the first of many fully computer-animated movies.
The popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation. Films like Avatar (2009) and The Jungle Book (2016) use CGI for the majority of the movie runtime, but still incorporate human actors into the mix. Computer animation in this era has achieved photorealism, to the point that computer-animated films such as The Lion King (2019) can be marketed as if they were live-action.
In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, which is analogous to a skeleton or stick figure. They are arranged into a default position known as a bind pose, or T-Pose. The position of each segment of the skeletal model is defined by animation variables, or Avars for short. In human and animal characters, many parts of the skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things, such as facial features (though other methods for facial animation exist). The character "Woody" in Toy Story, for example, uses 712 Avars (212 in the face alone). The computer does not usually render the skeletal model directly (it is invisible), but it does use the skeletal model to compute the exact position and orientation of the character, which is eventually rendered into an image. Thus, by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.
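A minimal sketch of the idea in Python; the variable names and the two Avars below are invented for illustration and do not come from any production system:

from math import sin, pi

# A pose is simply a set of named animation variables (Avars); animating a
# character means supplying new Avar values for each frame.
def pose_at(frame: int, fps: int = 24) -> dict:
    t = frame / fps
    return {
        "elbow_bend_deg": 45 + 30 * sin(2 * pi * t),  # hypothetical joint Avar
        "jaw_open": max(0.0, sin(4 * pi * t)),        # hypothetical facial Avar
    }

for frame in range(3):
    print(frame, pose_at(frame))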
There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or tween between them in a process called keyframing. Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation.
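A minimal sketch of keyframing with linear tweening, assuming the simplest possible interpolation (production systems typically use smoother, spline-based curves):

# The animator sets (frame, value) pairs only at strategic frames; the
# computer interpolates ("tweens") every frame in between.
def tween(keyframes, frame):
    keys = sorted(keyframes)
    for (f0, v0), (f1, v1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return v0 + t * (v1 - v0)
    raise ValueError("frame lies outside the keyframed range")

arm_angle = [(0, 0.0), (12, 90.0), (24, 45.0)]  # three keyframes
print([tween(arm_angle, f) for f in (0, 6, 12, 18, 24)])  # [0.0, 45.0, 90.0, 67.5, 45.0]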
In contrast, a newer method called motion capture makes use of live action footage. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. Their motion is recorded to a computer using video cameras and markers and that performance is then applied to the animated character.
Each method has its advantages and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, Bill Nighy provided the performance for the character Davy Jones. Even though Nighy does not appear in the movie himself, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done with conventional costuming.
3D computer animation combines 3D models of objects and programmed or hand "keyframed" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with "textures" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.
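A minimal sketch of such a model as data, assuming the simplest possible representation (real packages store far more, such as UV coordinates for textures and skinning weights for the rig):

# A polygon mesh: vertices as points in a 3D coordinate system, and faces
# as index triples into the vertex list. This one is a tetrahedron.
vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]

# Edges need not be stored; they can be derived from the faces.
edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
print(len(vertices), "vertices,", len(faces), "faces,", len(edges), "edges")  # 4, 4, 6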
3D models rigged for animation may contain thousands of control points; for example, "Woody" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis helped designers pinpoint the gorilla's prime location in the shots, and his expressions were used to model "human" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.
Computer animation can be created with a computer and animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can require much time on an ordinary home computer. Professional animators of movies, television and video games can make photorealistic animation with high detail; at this level of quality, movie animation would take hundreds of years to create on a home computer. Instead, many powerful workstation computers are used. Graphics workstations use two to four processors, are considerably more powerful than a home computer, and are specialized for rendering. Many workstations (known as a "render farm") are networked together to effectively act as a giant computer, resulting in a computer-animated movie that can be completed in about one to five years (however, this process is not composed solely of rendering). A workstation typically costs $2,000 to $16,000, with the more expensive stations able to render much faster due to the more technologically advanced hardware they contain. Professionals also use digital movie cameras, motion/performance capture, bluescreens, film editing software, props, and other tools used for movie animation. Programs like Blender allow people who cannot afford expensive animation and rendering software to work in a similar manner to those who use commercial-grade equipment.
The realistic modeling of human facial features is both one of the most challenging and most sought-after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables. Historically, the first SIGGRAPH tutorials on the state of the art in facial animation, in 1989 and 1990, proved to be a turning point in the field by bringing together and consolidating multiple research elements, and they sparked interest among a number of researchers.
The Facial Action Coding System (with 46 "action units" such as "lip bite" or "squint"), which had been developed in 1976, became a popular basis for many systems. As early as 2001, MPEG-4 included 68 Face Animation Parameters (FAPs) for lips, jaws, etc., and the field has made significant progress since then, with increasing use of facial microexpressions.
In some cases, an affective space such as the PAD emotional state model can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model is used as a high-level emotional space, and the lower-level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP) space is then used in a two-level structure: the PAD-PEP mapping and the PEP-FAP translation model.
Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph, or it can mean making the characters' animation believable and lifelike. Computer animation can be realistic with or without photorealistic rendering.
One of the greatest challenges in computer animation has been creating human characters that look and move with the highest degree of realism. Part of the difficulty in making pleasing, realistic human characters is the uncanny valley, the concept where the human audience (up to a point) tends to have an increasingly negative, emotional response as a human replica looks and acts more and more human. Films that have attempted photorealistic human characters, such as The Polar Express, Beowulf, and A Christmas Carol have been criticized as "disconcerting" and "creepy".
The goal of computer animation is not always to emulate live action as closely as possible, so many animated films instead feature characters who are anthropomorphic animals, legendary creatures and characters, superheroes, or otherwise have non-realistic, cartoon-like proportions. Computer animation can also be tailored to mimic or substitute for other kinds of animation, like traditional stop-motion animation (as shown in Flushed Away or The Peanuts Movie). Some of the long-standing basic principles of animation, like squash and stretch, call for movement that is not strictly realistic, and such principles still see widespread application in computer animation.
Some notable producers of computer-animated feature films include:
The popularity of websites that allow members to upload their own movies for others to view has created a growing community of independent and amateur computer animators. With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open-source animation software applications exist as well. The ease with which these animations can be distributed has also attracted professional animation talent. Companies such as PowToon and Vyond attempt to bridge the gap by giving amateurs access to professional animations as clip art.
The oldest (most backward compatible) web-based animations are in the animated GIF format, which can be uploaded and seen on the web easily. However, the raster graphics format of GIF animations slows the download and frame rate, especially with larger screen sizes. The growing demand for higher quality web-based animations was met by a vector graphics alternative that relied on the use of a plugin. For decades, Flash animations were the most popular format, until the web development community abandoned support for the Flash Player plugin. Web browsers on mobile devices and mobile operating systems never fully supported the Flash plugin.
By this time, internet bandwidth and download speeds had increased, making raster graphic animations more convenient. Some of the more complex vector graphic animations rendered at a slower frame rate than some of the raster graphic alternatives because of their more demanding rendering. Many of the GIF and Flash animations had already been converted to digital video formats, which were compatible with mobile devices and reduced file sizes via video compression technology. However, compatibility was still problematic as some of the popular video formats, such as Apple's QuickTime and Microsoft Silverlight, required plugins. YouTube, the most popular video-sharing website, also relied on the Flash plugin to deliver digital video in the Flash Video format.
The latest alternatives are HTML5 compatible animations. Technologies such as JavaScript and CSS animations made sequencing the movement of images in HTML5 web pages more convenient. SVG animations offered a vector graphic alternative to the original Flash graphic format, SmartSketch. YouTube offers an HTML5 alternative for digital video. APNG (Animated PNG) offered a raster graphic alternative to animated GIF files that enables multi-level transparency not available in GIFs.
Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply "textures", lighting and other effects to the polygons and finally rendering the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique called constructive solid geometry defines objects by conducting Boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.
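One way to realize constructive solid geometry is with signed distance functions, where the Boolean operations reduce to min and max; the sketch below assumes that strategy, which is only one of several possible implementations:

# CSG via signed distance: a point is inside a shape when its distance is
# negative; union and difference become min/max of the distances.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5 - r

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def difference(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A unit sphere with a smaller sphere carved out of its right side.
shape = difference(sphere(0, 0, 0, 1.0), sphere(0.8, 0, 0, 0.5))
print(shape(0, 0, 0) < 0)    # True: the centre is still inside
print(shape(0.8, 0, 0) < 0)  # False: this region was subtracted away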
To animate means, figuratively, to "give life to". There are two basic methods that animators commonly use to accomplish this.
Computer-generated animation is known as three-dimensional (3D) animation. Creators design an object or character with an X, a Y and a Z axis; unlike traditional animation, no pencil-to-paper drawings are involved. The object or character created is then taken into animation software. Key-framing and tweening are also carried out in computer-generated animation, but so are many techniques unrelated to traditional animation. Animators can break physical laws by using mathematical algorithms to cheat mass, force and gravity rules. Fundamentally, production speed and quality are major aspects enhanced by computer-generated animation. Another positive aspect of CGA is the fact that one can create a flock of creatures to act independently when created as a group. An animal's fur can be programmed to wave in the wind and lie flat when it rains instead of programming each strand of hair separately.
A few examples of computer-generated animation movies are Toy Story, Antz, Ice Age, Happy Feet, Despicable Me, Frozen, and Shrek.
2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings.
Computer animation is essentially a digital successor to stop motion techniques, but using 3D models, and traditional animation techniques using frame-by-frame animation of 2D illustrations.
For 2D figure animations, separate objects (illustrations) and separate transparent layers are used, with or without a virtual skeleton.
In 2D computer animation, moving objects are often referred to as "sprites." A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:
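A minimal Python rendering of that pseudocode; the screen width, step size, and drawing routines are illustrative placeholders, not part of any particular library:

SCREEN_WIDTH = 640

def draw_background():
    pass  # stand-in for the real routine that repaints the scene

def draw_sprite(x, y):
    pass  # stand-in for the real routine that blits the sprite image

x, y = 0, 240  # start at the left edge, roughly vertically centered
while x < SCREEN_WIDTH:
    draw_background()
    draw_sprite(x, y)  # draw the sprite on top of the fresh background
    x += 5             # nudge the location right before the next frame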
Computer-assisted animation is usually classed as two-dimensional (2D) animation. Drawings are either hand drawn (pencil to paper) or interactively drawn (on the computer) using different assisting appliances, and are positioned into specific software packages. Within the software package, the creator places drawings into different key frames which fundamentally create an outline of the most important movements. The computer then fills in the "in-between frames", a process commonly known as tweening. Computer-assisted animation employs new technologies to produce content faster than is possible with traditional animation, while still retaining the stylistic elements of traditionally drawn characters or objects.
Examples of films produced using computer-assisted animation are The Little Mermaid, The Rescuers Down Under, Beauty and the Beast, Aladdin, The Lion King, Pocahontas, The Hunchback of Notre Dame, Hercules, Mulan, Tarzan and The Road to El Dorado.
A text-to-video model is a machine learning model which takes as input a natural language description and produces a video matching that description. | [
{
"paragraph_id": 0,
"text": "Computer animation is the process used for digitally generating animations. The more general term computer-generated imagery (CGI) encompasses both static scenes (still images) and dynamic images (moving images), while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics. The animation's target is sometimes the computer itself, while other times it is film.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Computer animation is essentially a digital successor to stop motion techniques, but using models and traditional animation techniques using frame-by-frame animation illustrations. Also computer-generated animations allow a single graphic artist to produce such content without using actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new similar image but advanced slightly in time (usually at a rate of 24, 25, or 30 frames/second). This technique is identical to how the illusion of movement is achieved with television and motion pictures.",
"title": ""
},
{
"paragraph_id": 2,
"text": "For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered.",
"title": ""
},
{
"paragraph_id": 3,
"text": "For 3D animations, all frames must be rendered after the modeling is complete. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, like digital video. The frames may also be rendered in real-time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet (e.g. Adobe Flash, X3D) often use the software on the end user's computer to render in real-time as an alternative to streaming or pre-loaded high bandwidth animations.",
"title": ""
},
{
"paragraph_id": 4,
"text": "To trick the eye and the brain into thinking they are seeing a smoothly moving object, the pictures should be drawn at around 12 frames per second or faster. (A frame is one complete image.) With rates above 75 to 120 frames per second, no improvement in realism or smoothness is perceivable due to the way the eye and the brain both process images. At rates below 12 frames per second, most people can detect jerkiness associated with the drawing of new images that detracts from the illusion of realistic movement. Conventional hand-drawn cartoon animation often uses 15 frames per second in order to save on the number of drawings needed, but this is usually accepted because of the stylized nature of cartoons. To produce more realistic imagery, computer animation demands higher frame rates.",
"title": "Explanation"
},
{
"paragraph_id": 5,
"text": "Films seen in theaters in the United States run at 24 frames per second, which is sufficient to create the illusion of continuous movement. For high resolution, adapters are used.",
"title": "Explanation"
},
{
"paragraph_id": 6,
"text": "Early digital computer animation was developed at Bell Telephone Laboratories in the 1960s by Edward E. Zajac, Frank W. Sinden, Kenneth C. Knowlton, and A. Michael Noll. Other digital animation was also practiced at the Lawrence Livermore National Laboratory.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In 1967, a computer animation named \"Hummingbird\" was created by Charles Csuri and James Shaffer. In 1968, a computer animation called \"Kitty\" was created with BESM-4 by Nikolai Konstantinov, depicting a cat moving around. In 1971, a computer animation called \"Metadata\" was created, showing various shapes.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "An early step in the history of computer animation was the sequel to the 1973 film Westworld, a science-fiction film about a society in which robots live and work among humans. The sequel, Futureworld (1976), used the 3D wire-frame imagery, which featured a computer-animated hand and face both created by University of Utah graduates Edwin Catmull and Fred Parke. This imagery originally appeared in their student film A Computer Animated Hand, which they completed in 1972.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Developments in CGI technologies are reported each year at SIGGRAPH, an annual conference on computer graphics and interactive techniques that is attended by thousands of computer professionals each year. Developers of computer games and 3D video cards strive to achieve the same visual quality on personal computers in real-time as is possible for CGI films and animation. With the rapid advancement of real-time rendering quality, artists began to use game engines to render non-interactive movies, which led to the art form Machinima.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "CGI short films have been produced as independent animation since 1976. Early examples of feature films incorporating CGI animation include the live-action films Star Trek II: The Wrath of Khan and Tron (both 1982), and the Japanese anime film Golgo 13: The Professional (1983). VeggieTales is the first American fully 3D computer-animated series sold directly (made in 1993); its success inspired other animation series, such as ReBoot (1994) and Transformers: Beast Wars (1996) to adopt a fully computer-generated style.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The first full length computer-animated television series was ReBoot, which debuted in September 1994; the series followed the adventures of characters who lived inside a computer. The first feature-length computer-animated film is Toy Story (1995), which was made by Disney and Pixar: following an adventure centered around anthropomorphic toys and their owners, this groundbreaking film was also the first of many fully computer-animated movies.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The popularity of computer animation (especially in the field of special effects) skyrocketed during the modern era of U.S. animation. Films like Avatar (2009) and The Jungle Book (2016) use CGI for the majority of the movie runtime, but still incorporate human actors into the mix. Computer animation in this era has achieved photorealism, to the point that computer-animated films such as The Lion King (2019) are able to be marketed as if they were live-action.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In most 3D computer animation systems, an animator creates a simplified representation of a character's anatomy, which is analogous to a skeleton or stick figure. They are arranged into a default position known as a bind pose, or T-Pose. The position of each segment of the skeletal model is defined by animation variables, or Avars for short. In human and animal characters, many parts of the skeletal model correspond to the actual bones, but skeletal animation is also used to animate other things, with facial features (though other methods for facial animation exist). The character \"Woody\" in Toy Story, for example, uses 712 Avars (212 in the face alone). The computer does not usually render the skeletal model directly (it is invisible), but it does use the skeletal model to compute the exact position and orientation of that certain character, which is eventually rendered into an image. Thus by changing the values of Avars over time, the animator creates motion by making the character move from frame to frame.",
"title": "Animation methods"
},
{
"paragraph_id": 14,
"text": "There are several methods for generating the Avar values to obtain realistic motion. Traditionally, animators manipulate the Avars directly. Rather than set Avars for every frame, they usually set Avars at strategic points (frames) in time and let the computer interpolate or tween between them in a process called keyframing. Keyframing puts control in the hands of the animator and has roots in hand-drawn traditional animation.",
"title": "Animation methods"
},
{
"paragraph_id": 15,
"text": "In contrast, a newer method called motion capture makes use of live action footage. When computer animation is driven by motion capture, a real performer acts out the scene as if they were the character to be animated. Their motion is recorded to a computer using video cameras and markers and that performance is then applied to the animated character.",
"title": "Animation methods"
},
{
"paragraph_id": 16,
"text": "Each method has its advantages and as of 2007, games and films are using either or both of these methods in productions. Keyframe animation can produce motions that would be difficult or impossible to act out, while motion capture can reproduce the subtleties of a particular actor. For example, in the 2006 film Pirates of the Caribbean: Dead Man's Chest, Bill Nighy provided the performance for the character Davy Jones. Even though Nighy does not appear in the movie himself, the movie benefited from his performance by recording the nuances of his body language, posture, facial expressions, etc. Thus motion capture is appropriate in situations where believable, realistic behavior and action is required, but the types of characters required exceed what can be done throughout the conventional costuming.",
"title": "Animation methods"
},
{
"paragraph_id": 17,
"text": "3D computer animation combines 3D models of objects and programmed or hand \"keyframed\" movement. These models are constructed out of geometrical vertices, faces, and edges in a 3D coordinate system. Objects are sculpted much like real clay or plaster, working from general forms to specific details with various sculpting tools. Unless a 3D model is intended to be a solid color, it must be painted with \"textures\" for realism. A bone/joint animation system is set up to deform the CGI model (e.g., to make a humanoid model walk). In a process known as rigging, the virtual marionette is given various controllers and handles for controlling movement. Animation data can be created using motion capture, or keyframing by a human animator, or a combination of the two.",
"title": "Modeling"
},
{
"paragraph_id": 18,
"text": "3D models rigged for animation may contain thousands of control points — for example, \"Woody\" from Toy Story uses 700 specialized animation controllers. Rhythm and Hues Studios labored for two years to create Aslan in the movie The Chronicles of Narnia: The Lion, the Witch and the Wardrobe, which had about 1,851 controllers (742 in the face alone). In the 2004 film The Day After Tomorrow, designers had to design forces of extreme weather with the help of video references and accurate meteorological facts. For the 2005 remake of King Kong, actor Andy Serkis was used to help designers pinpoint the gorilla's prime location in the shots and used his expressions to model \"human\" characteristics onto the creature. Serkis had earlier provided the voice and performance for Gollum in J. R. R. Tolkien's The Lord of the Rings trilogy.",
"title": "Modeling"
},
{
"paragraph_id": 19,
"text": "Computer animation can be created with a computer and an animation software. Some impressive animation can be achieved even with basic programs; however, the rendering can require much time on an ordinary home computer. Professional animators of movies, television and video games could make photorealistic animation with high detail. This level of quality for movie animation would take hundreds of years to create on a home computer. Instead, many powerful workstation computers are used. Graphics workstation computers use two to four processors, and they are a lot more powerful than an actual home computer and are specialized for rendering. Many workstations (known as a \"render farm\") are networked together to effectively act as a giant computer, resulting in a computer-animated movie that can be completed in about one to five years (however, this process is not composed solely of rendering). A workstation typically costs $2,000 to $16,000 with the more expensive stations being able to render much faster due to the more technologically-advanced hardware that they contain. Professionals also use digital movie cameras, motion/performance capture, bluescreens, film editing software, props, and other tools used for movie animation. Programs like Blender allow for people who can not afford expensive animation and rendering software to be able to work in a similar manner to those who use the commercial grade equipment.",
"title": "Equipment"
},
{
"paragraph_id": 20,
"text": "The realistic modeling of human facial features is both one of the most challenging and sought after elements in computer-generated imagery. Computer facial animation is a highly complex field where models typically include a very large number of animation variables. Historically speaking, the first SIGGRAPH tutorials on State of the art in Facial Animation in 1989 and 1990 proved to be a turning point in the field by bringing together and consolidating multiple research elements and sparked interest among a number of researchers.",
"title": "Facial animation"
},
{
"paragraph_id": 21,
"text": "The Facial Action Coding System (with 46 \"action units\", \"lip bite\" or \"squint\"), which had been developed in 1976, became a popular basis for many systems. As early as 2001, MPEG-4 included 68 Face Animation Parameters (FAPs) for lips, jaws, etc., and the field has made significant progress since then and the use of facial microexpression has increased.",
"title": "Facial animation"
},
{
"paragraph_id": 22,
"text": "In some cases, an affective space, the PAD emotional state model, can be used to assign specific emotions to the faces of avatars. In this approach, the PAD model is used as a high level emotional space and the lower level space is the MPEG-4 Facial Animation Parameters (FAP). A mid-level Partial Expression Parameters (PEP) space is then used to in a two-level structure – the PAD-PEP mapping and the PEP-FAP translation model.",
"title": "Facial animation"
},
{
"paragraph_id": 23,
"text": "Realism in computer animation can mean making each frame look photorealistic, in the sense that the scene is rendered to resemble a photograph or make the characters' animation believable and lifelike. Computer animation can also be realistic with or without the photorealistic rendering.",
"title": "Realism"
},
{
"paragraph_id": 24,
"text": "One of the greatest challenges in computer animation has been creating human characters that look and move with the highest degree of realism. Part of the difficulty in making pleasing, realistic human characters is the uncanny valley, the concept where the human audience (up to a point) tends to have an increasingly negative, emotional response as a human replica looks and acts more and more human. Films that have attempted photorealistic human characters, such as The Polar Express, Beowulf, and A Christmas Carol have been criticized as \"disconcerting\" and \"creepy\".",
"title": "Realism"
},
{
"paragraph_id": 25,
"text": "The goal of computer animation is not always to emulate live action as closely as possible, so many animated films instead feature characters who are anthropomorphic animals, legendary creatures and characters, superheroes, or otherwise have non-realistic, cartoon-like proportions. Computer animation can also be tailored to mimic or substitute for other kinds of animation, like traditional stop-motion animation (as shown in Flushed Away or The Peanuts Movie). Some of the long-standing basic principles of animation, like squash and stretch, call for movement that is not strictly realistic, and such principles still see widespread application in computer animation.",
"title": "Realism"
},
{
"paragraph_id": 26,
"text": "Some notable producers of computer-animated feature films include:",
"title": "Animation studios"
},
{
"paragraph_id": 27,
"text": "The popularity of websites that allow members to upload their own movies for others to view has created a growing community of independent and amateur computer animators. With utilities and programs often included free with modern operating systems, many users can make their own animated movies and shorts. Several free and open-source animation software applications exist as well. The ease at which these animations can be distributed has attracted professional animation talent also. Companies such as PowToon and Vyond attempt to bridge the gap by giving amateurs access to professional animations as clip art.",
"title": "Web animations"
},
{
"paragraph_id": 28,
"text": "The oldest (most backward compatible) web-based animations are in the animated GIF format, which can be uploaded and seen on the web easily. However, the raster graphics format of GIF animations slows the download and frame rate, especially with larger screen sizes. The growing demand for higher quality web-based animations was met by a vector graphics alternative that relied on the use of a plugin. For decades, Flash animations were the most popular format, until the web development community abandoned support for the Flash Player plugin. Web browsers on mobile devices and mobile operating systems never fully supported the Flash plugin.",
"title": "Web animations"
},
{
"paragraph_id": 29,
"text": "By this time, internet bandwidth and download speeds increased, making raster graphic animations more convenient. Some of the more complex vector graphic animations had a slower frame rate due to complex rendering compared to some of the raster graphic alternatives. Many of the GIF and Flash animations were already converted to digital video formats, which were compatible with mobile devices and reduced file sizes via video compression technology. However, compatibility was still problematic as some of the popular video formats such as Apple's QuickTime and Microsoft Silverlight required plugins. YouTube, the most popular video sharing website, was also relying on the Flash plugin to deliver digital video in the Flash Video format.",
"title": "Web animations"
},
{
"paragraph_id": 30,
"text": "The latest alternatives are HTML5 compatible animations. Technologies such as JavaScript and CSS animations made sequencing the movement of images in HTML5 web pages more convenient. SVG animations offered a vector graphic alternative to the original Flash graphic format, SmartSketch. YouTube offers an HTML5 alternative for digital video. APNG (Animated PNG) offered a raster graphic alternative to animated GIF files that enables multi-level transparency not available in GIFs.",
"title": "Web animations"
},
{
"paragraph_id": 31,
"text": "Computer animation uses different techniques to produce animations. Most frequently, sophisticated mathematics is used to manipulate complex three-dimensional polygons, apply \"textures\", lighting and other effects to the polygons and finally rendering the complete image. A sophisticated graphical user interface may be used to create the animation and arrange its choreography. Another technique called constructive solid geometry defines objects by conducting Boolean operations on regular shapes, and has the advantage that animations may be accurately produced at any resolution.",
"title": "Detailed examples"
},
{
"paragraph_id": 32,
"text": "To animate means, figuratively, to \"give life to\". There are two basic methods that animators commonly use to accomplish this.",
"title": "Computer-generated animation"
},
{
"paragraph_id": 33,
"text": "Computer-generated animation is known as three-dimensional (3D) animation. Creators design an object or character with an X, a Y and a Z axis. No pencil-to-paper drawings create the way computer-generated animation works. The object or character created will then be taken into a software. Key-framing and tweening are also carried out in computer-generated animation but so are many techniques unrelated to traditional animation. Animators can break physical laws by using mathematical algorithms to cheat mass, force and gravity rulings. Fundamentally, time scale and quality could be said to be a preferred way to produce animation as they are major aspects enhanced by using computer-generated animation. Another positive aspect of CGA is the fact one can create a flock of creatures to act independently when created as a group. An animal's fur can be programmed to wave in the wind and lie flat when it rains instead of separately programming each strand of hair.",
"title": "Computer-generated animation"
},
{
"paragraph_id": 34,
"text": "A few examples of computer-generated animation movies are Toy Story, Antz, Ice Age, Happy Feet, Despicable Me, Frozen, and Shrek.",
"title": "Computer-generated animation"
},
{
"paragraph_id": 35,
"text": "2D computer graphics are still used for stylistic, low bandwidth, and faster real-time renderings.",
"title": "2D computer animation"
},
{
"paragraph_id": 36,
"text": "Computer animation is essentially a digital successor to stop motion techniques, but using 3D models, and traditional animation techniques using frame-by-frame animation of 2D illustrations.",
"title": "2D computer animation"
},
{
"paragraph_id": 37,
"text": "For 2D figure animations, separate objects (illustrations) and separate transparent layers are used with or without that virtual skeleton.",
"title": "2D computer animation"
},
{
"paragraph_id": 38,
"text": "In 2D computer animation, moving objects are often referred to as \"sprites.\" A sprite is an image that has a location associated with it. The location of the sprite is changed slightly, between each displayed frame, to make the sprite appear to move. The following pseudocode makes a sprite move from left to right:",
"title": "2D computer animation"
},
{
"paragraph_id": 39,
"text": "Computer-assisted animation is usually classed as two-dimensional (2D) animation. Drawings are either hand drawn (pencil to paper) or interactively drawn (on the computer) using different assisting appliances and are positioned into specific software packages. Within the software package, the creator places drawings into different key frames which fundamentally create an outline of the most important movements. The computer then fills in the \"in-between frames\", a process commonly known as Tweening. Computer-assisted animation employs new technologies to produce content faster than is possible with traditional animation, while still retaining the stylistic elements of traditionally drawn characters or objects.",
"title": "2D computer animation"
},
{
"paragraph_id": 40,
"text": "Examples of films produced using computer-assisted animation are The Little Mermaid, The Rescuers Down Under, Beauty and the Beast, Aladdin, The Lion King, Pocahontas, The Hunchback of Notre Dame, Hercules, Mulan, Tarzan and The Road to El Dorado.",
"title": "2D computer animation"
},
{
"paragraph_id": 41,
"text": "A text-to-video model is a machine learning model which takes as input a natural language description and produces a video matching that description.",
"title": "2D computer animation"
}
] | Computer animation is the process used for digitally generating animations. The more general term computer-generated imagery (CGI) encompasses both static scenes and dynamic images, while computer animation only refers to moving images. Modern computer animation usually uses 3D computer graphics. The animation's target is sometimes the computer itself, while other times it is film. Computer animation is essentially a digital successor to stop motion techniques, but using models and traditional animation techniques using frame-by-frame animation illustrations. Also computer-generated animations allow a single graphic artist to produce such content without using actors, expensive set pieces, or props. To create the illusion of movement, an image is displayed on the computer monitor and repeatedly replaced by a new similar image but advanced slightly in time. This technique is identical to how the illusion of movement is achieved with television and motion pictures. For 3D animations, objects (models) are built on the computer monitor (modeled) and 3D figures are rigged with a virtual skeleton. Then the limbs, eyes, mouth, clothes, etc. of the figure are moved by the animator on key frames. The differences in appearance between key frames are automatically calculated by the computer in a process known as tweening or morphing. Finally, the animation is rendered. For 3D animations, all frames must be rendered after the modeling is complete. For pre-recorded presentations, the rendered frames are transferred to a different format or medium, like digital video. The frames may also be rendered in real-time as they are presented to the end-user audience. Low bandwidth animations transmitted via the internet often use the software on the end user's computer to render in real-time as an alternative to streaming or pre-loaded high bandwidth animations. | 2001-10-13T07:37:37Z | 2023-12-12T04:02:24Z | [
"Template:Computer science",
"Template:Sfn",
"Template:Refbegin",
"Template:Globalize",
"Template:Refend",
"Template:Commons category-inline",
"Template:Animation",
"Template:Authority control",
"Template:See also",
"Template:Reflist",
"Template:Cite news",
"Template:Em",
"Template:Cite web",
"Template:Library resources box",
"Template:Webarchive",
"Template:Div col end",
"Template:Cite magazine",
"Template:Excerpt",
"Template:Cite book",
"Template:Short description",
"Template:More citations needed",
"Template:Film genres",
"Template:Div col",
"Template:Cite journal",
"Template:Split",
"Template:Main",
"Template:Portal"
] | https://en.wikipedia.org/wiki/Computer_animation |
6,778 | Ceawlin of Wessex | Ceawlin ([ˈtʃæɑw.lin] CHOW-lin; also spelled Ceaulin, Caelin, Celin, died ca. 593) was a King of Wessex. He may have been the son of Cynric of Wessex and the grandson of Cerdic of Wessex, whom the Anglo-Saxon Chronicle represents as the leader of the first group of Saxons to come to the land which later became Wessex. Ceawlin was active during the last years of the Anglo-Saxon expansion, with little of southern England remaining in the control of the native Britons by the time of his death.
The chronology of Ceawlin's life is highly uncertain. The historical accuracy and dating of many of the events in the later Anglo-Saxon Chronicle have been called into question, and his reign is variously listed as lasting seven, seventeen, or thirty-two years. The Chronicle records several battles of Ceawlin's between the years 556 and 592, including the first record of a battle between different groups of Anglo-Saxons, and indicates that under Ceawlin Wessex acquired significant territory, some of which was later to be lost to other Anglo-Saxon kingdoms. Ceawlin is also named as one of the eight "bretwaldas", a title given in the Chronicle to eight rulers who had overlordship over southern Britain, although the extent of Ceawlin's control is not known.
Ceawlin died in 593, having been deposed the year before, possibly by his successor, Ceol. He is recorded in various sources as having two sons, Cutha and Cuthwine, but the genealogies in which this information is found are known to be unreliable.
The history of the sub-Roman period in Britain is poorly sourced and the subject of a number of important disagreements among historians. It appears, however, that in the fifth century, raids on Britain by continental peoples developed into migrations. The newcomers included Angles, Saxons, Jutes and Frisians. These peoples captured territory in the east and south of England, but at about the end of the fifth century, a British victory at the battle of Mons Badonicus halted the Anglo-Saxon advance for fifty years. Near the year 550, however, the British began to lose ground once more, and within twenty-five years, it appears that control of almost all of southern England was in the hands of the invaders.
The peace following the battle of Mons Badonicus is attested partly by Gildas, a monk, who wrote De Excidio et Conquestu Britanniae or On the Ruin and Conquest of Britain during the middle of the sixth century. This essay is a polemic against corruption and Gildas provides little in the way of names and dates. He appears, however, to state that peace had lasted from the year of his birth to the time he was writing. The Anglo-Saxon Chronicle is the other main source that bears on this period, in particular in an entry for the year 827 that records a list of the kings who bore the title "bretwalda", or "Britain-ruler". That list shows a gap in the early sixth century that matches Gildas's version of events.
Ceawlin's reign belongs to the period of Anglo-Saxon expansion at the end of the sixth century. Though there are many unanswered questions about the chronology and activities of the early West Saxon rulers, it is clear that Ceawlin was one of the key figures in the final Anglo-Saxon conquest of southern Britain.
The two main written sources for early West Saxon history are the Anglo-Saxon Chronicle and the West Saxon Genealogical Regnal List. The Chronicle is a set of annals which were compiled near the year 890, during the reign of King Alfred the Great of Wessex. They record earlier material for the older entries, which were assembled from earlier annals that no longer survive, as well as from saga material that might have been transmitted orally. The Chronicle dates the arrival of the future "West Saxons" in Britain to 495, when Cerdic and his son, Cynric, land at Cerdices ora, or Cerdic's shore. Almost twenty annals describing Cerdic's campaigns and those of his descendants appear interspersed through the next hundred years of entries in the Chronicle. Although these annals provide most of what is known about Ceawlin, the historicity of many of the entries is uncertain.
The West Saxon Genealogical Regnal List is a list of rulers of Wessex, including the lengths of their reigns. It survives in several forms, including as a preface to the [B] manuscript of the Chronicle. Like the Chronicle, the List was compiled in its present form during the reign of Alfred the Great, but an earlier version of the List was also one of the sources of the Chronicle itself. Both the list and the Chronicle are influenced by the desire of their writers to use a single line of descent to trace the lineage of the Kings of Wessex through Cerdic to Gewis, the legendary eponymous ancestor of the West Saxons, who is made to descend from Woden. The result served the political purposes of the scribe, but is riddled with contradictions for historians.
The contradictions may be seen clearly by calculating dates by different methods from the various sources. The first event in West Saxon history whose date can be regarded as reasonably certain is the baptism of Cynegils, which occurred in the late 630s, perhaps as late as 640. The Chronicle dates Cerdic's arrival to 495, but adding up the lengths of the reigns as given in the West Saxon Genealogical Regnal List leads to the conclusion that Cerdic's reign might have started in 532, a difference of 37 years. Neither 495 nor 532 may be treated as reliable; however, the latter date relies on the presumption that the Regnal List is correct in presenting the Kings of Wessex as having succeeded one another, with no omitted kings, and no joint kingships, and that the durations of the reigns are correct as given. None of these presumptions may be made safely.
The sources are also inconsistent on the length of Ceawlin's reign. The Chronicle gives it as thirty-two years, from 560 to 592, but the manuscripts of the Regnal List disagree: different copies give it as seven or seventeen years. David Dumville's detailed study of the Regnal List finds that it originally dated the arrival of the West Saxons in England to 532, and favours seven years as the earliest claimed length of Ceawlin's reign, with dates of 581–588 proposed. Dumville suggests that Ceawlin's reign length was later inflated to extend the apparent longevity of the Cerdicing dynasty further back into the past, and that Ceawlin's reign in particular was extended because he is mentioned by Bede, a status which led later West Saxon historians to conclude that he deserved a more impressive-looking reign. The sources do agree that Ceawlin was the son of Cynric, and he is usually named as the father of Cuthwine. There is one discrepancy in this case: the entry for 685 in the [A] version of the Chronicle assigns Ceawlin a son, Cutha, but in the 855 entry in the same manuscript, Cutha is listed as the son of Cuthwine. Cutha is also named as Ceawlin's brother in the [E] and [F] versions of the Chronicle, in the 571 and 568 entries, respectively.
Whether Ceawlin is a descendant of Cerdic is a matter of debate. Subgroupings of different West Saxon lineages give the impression of separate groups, of which Ceawlin's line is one. Some of the problems in the Wessex genealogies may have come about because of efforts to integrate Ceawlin's line with the other lineages: it became very important to the West Saxons to be able to trace the ancestors of their rulers back to Cerdic. Another reason for doubting the literal accuracy of these early genealogies is that the names of several early members of the dynasty do not appear to have Germanic etymologies, as would be expected of the leaders of an apparently Anglo-Saxon dynasty. The name Ceawlin has no convincing Old English etymology; it seems more likely to be of British origin.
The earliest sources do not use the term "West Saxon". According to Bede's Ecclesiastical History of the English People, the term is interchangeable with the Gewisse. The term "West Saxon" appears only in the late seventh century, after the reign of Cædwalla.
Ultimately, the kingdom of Wessex occupied the southwest of England, but the initial stages in this expansion are not apparent from the sources. Cerdic's landing, whenever it is to be dated, seems to have been near the Isle of Wight, and the annals record the conquest of the island in 530. In 534, according to the Chronicle, Cerdic died and his son Cynric took the throne; the Chronicle adds that "they gave the Isle of Wight to their nephews, Stuf and Wihtgar". These records are in direct conflict with Bede, who states that the Isle of Wight was settled by Jutes, not Saxons; the archaeological record is somewhat in favour of Bede on this.
Subsequent entries in the Chronicle give details of some of the battles by which the West Saxons won their kingdom. Ceawlin's campaigns are not placed near the coast; they range along the Thames Valley and beyond, as far as Surrey in the east and the mouth of the Severn in the west. Ceawlin was clearly part of the West Saxon expansion, but the military history of the period is difficult to reconstruct. In what follows the dates are as given in the Chronicle, although, as noted above, these are probably earlier than the true dates.
The first record of a battle fought by Ceawlin is in 556, when he and his father, Cynric, fought the native Britons at "Beran byrg", or Bera's Stronghold. This is now identified as Barbury Castle, an Iron Age hill fort in Wiltshire, near Swindon. Cynric would have been king of Wessex at this time.
The first battle Ceawlin fought as king is dated by the Chronicle to 568, when he and Cutha fought Æthelberht, the king of Kent. The entry says: "Here Ceawlin and Cutha fought against Aethelberht and drove him into Kent; and they killed two ealdormen, Oslaf and Cnebba, on Wibbandun." The location of "Wibbandun", which can be translated as "Wibba's Mount", has not been definitely identified; it was at one time thought to be Wimbledon, but this is now known to be incorrect.
David Cooper proposes Wyboston, a small village 8 miles north-east of Bedford on the west bank of the Great Ouse. Wibbandun is often written as Wibba's Dun, which is phonetically close to Wyboston. Æthelberht's dominance, which according to Bede ran from Kent to the Humber, extended across the Anglian territories south of the Wash, and it was this region that came under threat from Ceawlin as he looked to establish a defensible boundary on the Great Ouse in the easternmost part of his territory. In addition, Cnebba, named as slain in this battle, has been associated with Knebworth, which lies 20 miles to the south of Wyboston. Half a mile south of Wyboston is a village called Chawston. The origin of the place-name is unknown, but it might derive from the Old English Ceawston or Ceawlinston. A defeat at Wyboston would have damaged Æthelberht's overlord status and diminished his influence over the Anglians. The idea that he was driven or 'pursued' into Kent (depending on which translation of the Anglo-Saxon Chronicle is preferred) should not be taken literally; similar phraseology is often found in the Chronicle when one king bests another. A defeat suffered during an expedition to help his Anglian clients would have caused Æthelberht to withdraw into Kent to recover.
This battle is notable as the first recorded conflict between the invading peoples: previous battles recorded in the Chronicle are between the Anglo-Saxons and the native Britons.
There are multiple examples of joint kingship in Anglo-Saxon history, and this may be another: it is not clear what Cutha's relationship to Ceawlin was, but it is certainly possible that he was also a king. The annal for 577, below, is another possible example.
The annal for 571 reads: "Here Cuthwulf fought against the Britons at Bedcanford, and took four settlements: Limbury and Aylesbury, Benson and Eynsham; and in the same year he passed away." Cuthwulf's relationship with Ceawlin is unknown, but the alliteration common to Anglo-Saxon royal families suggests Cuthwulf may have been part of the West Saxon royal line. The location of the battle itself is unidentified. It has been suggested that it was Bedford, but what is known of the early history of Bedford's name does not support this. This battle is of interest because it is surprising that an area so far east should still have been in British hands at this late date: there is ample archaeological evidence of early Saxon and Anglian presence in the Midlands, and historians have generally interpreted Gildas's De Excidio as implying that the Britons had lost control of this area by the mid-sixth century. One possible explanation is that this annal records a reconquest of land that the Anglo-Saxons had lost to the Britons in the campaigns ending in the battle of Mons Badonicus.
The annal for 577 reads "Here Cuthwine and Ceawlin fought against the Britons, and they killed three kings, Coinmail and Condidan and Farinmail, in the place which is called Dyrham, and took three cities: Gloucester and Cirencester and Bath." This entry is all that is known of these Briton kings; their names are in an archaic form that makes it very likely that this annal derives from a much older written source. The battle itself has long been regarded as a key moment in the Saxon advance, since in reaching the Bristol Channel, the West Saxons divided the Britons west of the Severn from land communication with those in the peninsula to the south of the Channel. Wessex almost certainly lost this territory to Penda of Mercia in 628, when the Chronicle records that "Cynegils and Cwichelm fought against Penda at Cirencester and then came to an agreement."
It is possible that when Ceawlin and Cuthwine took Bath, they found the Roman baths still operating to some extent. Nennius, a ninth-century historian, mentions a "Hot Lake" in the land of the Hwicce, which was along the Severn, and adds "It is surrounded by a wall, made of brick and stone, and men may go there to bathe at any time, and every man can have the kind of bath he likes. If he wants, it will be a cold bath; and if he wants a hot bath, it will be hot". Bede also describes hot baths in the geographical introduction to the Ecclesiastical History in terms very similar to those of Nennius.
Wansdyke, an early-medieval defensive linear earthwork, runs from south of Bristol to near Marlborough, Wiltshire, passing not far from Bath. It was probably built in the fifth or sixth century, perhaps by Ceawlin.
Ceawlin's last recorded victory is in 584. The entry reads "Here Ceawlin and Cutha fought against the Britons at the place which is named Fethan leag, and Cutha was killed; and Ceawlin took many towns and countless war-loot, and in anger he turned back to his own [territory]." There is a wood named "Fethelée" mentioned in a twelfth-century document relating to Stoke Lyne, in Oxfordshire, and it is now thought that the battle of Fethan leag must have been fought in this area.
The phrase "in anger he turned back to his own" probably indicates that this annal is drawn from saga material, as perhaps are all of the early Wessex annals. It also has been used to argue that perhaps, Ceawlin did not win the battle and that the chronicler chose not to record the outcome fully—a king does not usually come home "in anger" after taking "many towns and countless war-loot". It may be that Ceawlin's overlordship of the southern Britons came to an end with this battle.
About 731, Bede, a Northumbrian monk and chronicler, wrote the Ecclesiastical History of the English People. The work was not primarily a secular history, but Bede provides much information about the history of the Anglo-Saxons, including a list, early in the work, of seven kings who, he said, held "imperium" over the other kingdoms south of the Humber. The usual translation for "imperium" is "overlordship". Bede names Ceawlin as the second on the list, although he spells the name "Caelin", adding that he was "known in the speech of his own people as Ceaulin". Bede also makes it clear that Ceawlin was not a Christian: he mentions a later king, Æthelberht of Kent, as "the first to enter the kingdom of heaven".
The Anglo-Saxon Chronicle, in an entry for the year 827, repeats Bede's list, adds Egbert of Wessex, and also mentions that they were known as "bretwalda", or "Britain-ruler". A great deal of scholarly attention has been given to the meaning of this word. It has been described as a term "of encomiastic poetry", but there is also evidence that it implied a definite role of military leadership.
Bede says that these kings had authority "south of the Humber", but the span of control, at least of the earlier bretwaldas, was likely less than this. In Ceawlin's case the range of control is hard to determine accurately, but Bede's inclusion of Ceawlin in the list of kings who held imperium, together with the list of battles he is recorded as having won, indicates an energetic and successful leader who, from a base in the upper Thames valley, dominated much of the surrounding area and held overlordship over the southern Britons for some period. Despite Ceawlin's military successes, the northern conquests he made could not always be retained: Mercia took much of the upper Thames valley, and the north-eastern towns won in 571 subsequently fell under the control of Kent and Mercia at different times.
Bede's concept of the power of these overlords must also be regarded as the product of his eighth-century viewpoint. When the Ecclesiastical History was written, Æthelbald of Mercia dominated the English south of the Humber, and Bede's view of the earlier kings was doubtless strongly coloured by the state of England at that time. For the earlier bretwaldas, such as Ælle and Ceawlin, there must be some element of anachronism in Bede's description. It is also possible that Bede meant only to refer to power over Anglo-Saxon kingdoms, not the native Britons.
Ceawlin is the second king in Bede's list. All the subsequent bretwaldas followed more or less consecutively, but there is a long gap, perhaps fifty years, between Ælle of Sussex, the first bretwalda, and Ceawlin. The lack of gaps between the overlordships of the later bretwaldas has been used to argue that Ceawlin's dates match the later entries in the Chronicle with reasonable accuracy. According to this analysis, the next bretwalda, Æthelberht of Kent, must already have been a dominant king by the time Pope Gregory the Great wrote to him in 601, since Gregory would not have written to an underking. Ceawlin defeated Æthelberht in 568, according to the Chronicle. Æthelberht's dates are a matter of debate, but recent scholarly consensus has his reign starting no earlier than 580. The 568 date for the battle at Wibbandun is thought to be unlikely because of the assertion in various versions of the West Saxon Genealogical Regnal List that Ceawlin's reign lasted either seven or seventeen years. If this battle is placed near the year 590, before Æthelberht had established himself as a powerful king, then the subsequent annals relating to Ceawlin's defeat and death may be reasonably close to the correct date. In any case, the battle with Æthelberht is unlikely to have been more than a few years on either side of 590. The gap between Ælle and Ceawlin, on the other hand, has been taken as supporting evidence for the story told by Gildas in De Excidio of a peace lasting a generation or more following a British victory at Mons Badonicus.
Æthelberht of Kent succeeds Ceawlin on the list of bretwaldas, but the reigns may overlap somewhat: recent evaluations give Ceawlin a likely reign of 581–588 and place Æthelberht's accession near the year 589, but these analyses are no more than scholarly guesses. Ceawlin's eclipse in 592, probably by Ceol, may have been the occasion for Æthelberht's rise to prominence; Æthelberht was very likely the dominant Anglo-Saxon king by 597. Æthelberht's rise may have begun earlier: the 584 annal, even if it records a victory, is the last of Ceawlin's victories recorded in the Chronicle, and the period after that may have been one of Æthelberht's ascent and Ceawlin's decline.
Ceawlin lost the throne of Wessex in 592. The annal for that year reads, in part: "Here there was great slaughter at Woden's Barrow, and Ceawlin was driven out." Woden's Barrow is a tumulus, now called Adam's Grave, at Alton Priors, Wiltshire. No details of his opponent are given. The medieval chronicler William of Malmesbury, writing in about 1120, says that it was "the Angles and the British conspiring together". Alternatively, it may have been Ceol, who is supposed to have been the next king of Wessex, ruling for six years according to the West Saxon Genealogical Regnal List. According to the Anglo-Saxon Chronicle, Ceawlin died the following year. The relevant part of the annal reads: "Here Ceawlin and Cwichelm and Crida perished." Nothing more is known of Cwichelm and Crida, although they may have been members of the Wessex royal house—their names fit the alliterative pattern common to royal houses of the time.
According to the Regnal List, Ceol was a son of Cutha, who was a son of Cynric; his brother Ceolwulf reigned for seventeen years after him. It is possible that some fragmentation of control among the West Saxons occurred at Ceawlin's death: Ceol and Ceolwulf may have been based in Wiltshire, as opposed to the upper Thames valley. This split may also have contributed to Æthelberht's ability to rise to dominance in southern England. The West Saxons remained influential in military terms, however: the Chronicle and Bede record continued military activity against Essex and Sussex within twenty or thirty years of Ceawlin's death.
6,779 | Christchurch (disambiguation) | Christchurch is the largest city in the South Island of New Zealand.
Christchurch may also refer to:
6,780 | CD-R | CD-R (Compact disc-recordable) is a digital optical disc storage format. A CD-R disc is a compact disc that can be written once and read arbitrarily many times.
CD-R discs (CD-Rs) are readable by most CD readers manufactured prior to the introduction of CD-R, unlike CD-RW discs.
Originally named CD Write-Once (WO), the CD-R specification was first published in 1988 by Philips and Sony in the Orange Book, which consists of several parts covering CD-WO, CD-MO (Magneto-Optic), and, later, CD-RW (ReWritable). The latest editions have abandoned the term CD-WO in favor of CD-R, while CD-MO was rarely used. In their low-level encoding and data format, written CD-Rs and CD-RWs are fully compatible with the audio CD (Red Book CD-DA) and data CD (Yellow Book CD-ROM) standards. The Yellow Book standard for CD-ROM specifies only a high-level data format and refers to the Red Book for all physical format and low-level code details, such as track pitch, linear bit density, and bitstream encoding. This means they use Eight-to-Fourteen Modulation, CIRC error correction, and, for CD-ROM, the third error correction layer defined in the Yellow Book.
Properly written CD-R discs on blanks shorter than 80 minutes are fully compatible with the audio CD and CD-ROM standards in all details, including physical specifications. 80-minute CD-R discs marginally violate the Red Book physical format specifications, and longer discs are non-compliant. CD-RW discs have lower reflectivity than CD-R or pressed (non-writable) CDs and for this reason cannot meet the Red Book standard. Some hardware compatible with Red Book CDs may have difficulty reading CD-Rs and, because of their lower reflectivity, especially CD-RWs. To the extent that CD hardware can read extended-length discs or CD-RW discs, it is because that hardware has capability beyond the minimum required by the Red Book and Yellow Book standards; the hardware is more capable than it needs to be to bear the Compact Disc logo.
CD-R recording systems available in 1990 were similar to the washing machine-sized Meridian CD Publisher, based on the two-piece rack mount Yamaha PDS audio recorder costing $35,000, not including the required external ECC circuitry for data encoding, SCSI hard drive subsystem, and MS-DOS control computer.
On July 3, 1991, the first recording of a concert directly to CD was made using a Yamaha YPDR 601. The concert was performed by Claudio Baglioni at the Stadio Flaminio in Rome, Italy. At that time, it was generally anticipated that recordable CDs would have a lifetime of no more than 10 years. However, as of July 2020 the CD from this live recording still plays back with no uncorrectable errors.
In the same year, the first company to successfully and professionally duplicate CD-R media was CDRM Recordable Media. With quality blanks from Taiyo Yuden in limited supply, early CD-R media for duplication used phthalocyanine dye, which has a light aqua color. By 1992, the cost of typical recorders was down to $10,000–12,000, and in September 1995, Hewlett-Packard introduced its model 4020i, manufactured by Philips, which, at $995, was the first recorder to cost less than $1,000. As of the 2010s, devices capable of writing to CD-Rs and other types of writable CDs could be found for under $20.
The dye materials developed by Taiyo Yuden made it possible for CD-R discs to be compatible with Audio CD and CD-ROM discs.
In the United States, there is a market separation between "music" CD-Rs and "data" CD-Rs, the former being notably more expensive than the latter due to industry copyright arrangements with the RIAA. Specifically, the price of every music CD-R includes a mandatory royalty disbursed to RIAA members by the disc manufacturer; this grants the disc an "application flag" indicating that the royalty has been paid. Consumer standalone music recorders refuse to burn CD-Rs that are missing this flag. Professional CD recorders are not subject to this restriction and can record music to data discs. The two types of discs are functionally and physically identical other than this, and computer CD burners can record data and/or music to either. New music CD-Rs are still being manufactured as of the late 2010s, although demand for them has declined as CD-based music recorders have been supplanted by other devices incorporating the same or similar functionality.
Prior to CD-R, Tandy Corporation had announced a rewritable CD system known as the Tandy High-Density Optical Recording (THOR) system, claiming to offer support for erasable and rewritable discs, made possible by a "secret coating material" on which Tandy had applied for patents, and reportedly based partly on a process developed by Optical Data Inc., with research and development undertaken at Tandy's Magnetic Media Research Center. Known also as the Tandy High-Intensity Optical Recording system, THOR-CD media was intended to be playable in existing CD players, being compatible with existing CD audio and CD-ROM equipment, with the discs themselves employing a layer in which the "marks", "bumps" or "pits" readable by a conventional CD player could be established in, and removed from, the medium by a laser operating at a different frequency. Tandy's announcement was surprising enough to "catch half a dozen industries off guard", claiming availability of consumer-level audio and video products below $500 by the end of 1990, and inviting other organisations to license the technology. The announcement attracted enthusiasm but also skepticism of Tandy's capability to deliver the system, with the latter proving to be justified, the technology having been "announced... heavily promoted; then it was delayed, and finally, it just never appeared".
A standard CD-R is a 1.2 mm (0.047 in) thick disc made of polycarbonate, about 120 mm (4.7 in) in diameter. The 120 mm disc has a storage capacity of 74 minutes of audio or 650 megabytes (MB) of data. CD-R/RWs are also available with capacities of 80 minutes of audio or 737,280,000 bytes (700 MB), which they achieve by molding the disc at the tightest tolerances allowed by the Orange Book CD-R/CD-RW standards. The engineering margin that was reserved for manufacturing tolerance has been used for data capacity instead, leaving no tolerance for manufacturing; for these discs to be truly compliant with the Orange Book standard, the manufacturing process must be perfect.
Despite the foregoing, most CD-Rs on the market have an 80-minute capacity. There are also 90-minute/790 MB and 99-minute/870 MB discs, although they are less common and depart from the Orange Book standard. Due to the limitations of the data structures in the ATIP, 90- and 99-minute blanks identify themselves as 80-minute discs. As the ATIP is part of the Orange Book standard, its design does not support some nonstandard disc configurations. To use the additional capacity, these discs have to be burned using "overburn" options in the CD recording software. Overburning is so named because it goes beyond the written standards, but, due to market demand, it has nonetheless become a de facto standard function in most CD writing drives and the software for them.
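The quoted capacities follow directly from the Red Book geometry: audio plays at 75 sectors per second, a Mode 1 data sector carries 2,048 bytes of user data, and a raw audio sector carries 2,352 bytes. A minimal sketch of the arithmetic in Python, assuming these standard constants:

```python
SECTORS_PER_SECOND = 75          # Red Book: 75 sectors ("frames") per second
DATA_BYTES_PER_SECTOR = 2048     # Mode 1 user data per sector
AUDIO_BYTES_PER_SECTOR = 2352    # raw audio payload per sector

def capacity(minutes: int) -> tuple[int, int]:
    """Return (data_bytes, audio_bytes) for a disc of the given play time."""
    sectors = minutes * 60 * SECTORS_PER_SECOND
    return sectors * DATA_BYTES_PER_SECTOR, sectors * AUDIO_BYTES_PER_SECTOR

for minutes in (74, 80, 90, 99):
    data, audio = capacity(minutes)
    # 80 min -> 737,280,000 data bytes, matching the figure quoted above
    print(f"{minutes} min: {data:,} data bytes ({data / 2**20:.0f} MiB), "
          f"{audio:,} audio bytes")
```

Running this reproduces the nominal figures in the text: 74 minutes yields roughly 650 MiB of Mode 1 data, 80 minutes exactly 737,280,000 bytes, and 90 and 99 minutes the approximate 790 MB and 870 MB capacities of the nonstandard blanks.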
Some drives use special techniques, such as Plextor's GigaRec or Sanyo's HD-BURN, to write more data onto a given disc; these techniques are deviations from the compact disc (Red, Yellow, and/or Orange Book) standards, making the recorded discs proprietary-formatted and not fully compatible with standard CD players and drives. In certain applications where discs will not be distributed or exchanged outside a private group and will not be archived for a long time, a proprietary format may be an acceptable way to obtain greater capacity (up to 1.2 GB with GigaRec or 1.8 GB with HD-BURN on 99-minute media). The greatest risk in using such a proprietary data storage format, assuming that it works reliably as designed, is that it may be difficult or impossible to repair or replace the hardware used to read the media if it fails, is damaged, or is lost after its original vendor discontinues it.
Nothing in the Red, Yellow, or Orange Book standards prohibits disc reading/writing devices from having the capacity to read/write discs beyond the compact disc standards. The standards do require discs to meet precise requirements in order to be called compact discs, but the other discs may be called by other names; if this were not true, no DVD drive could legally bear the compact disc logo. While disc players and drives may have capabilities beyond the standards, enabling them to read and write nonstandard discs, there is no assurance, in the absence of explicit additional manufacturer specifications beyond normal compact disc logo certification, that any particular player or drive will perform beyond the standards at all or consistently. If the same device with no explicit performance specs beyond the compact disc logo initially handles nonstandard discs reliably, there is no assurance that it will not later stop doing so, and in that case, there is no assurance that it can be made to do so again by service or adjustment. Discs with capacities larger than 650 MB, and especially those larger than 700 MB, are less interchangeable among players/drives than standard discs and are not very suitable for archival use, as their readability on future equipment, or even on the same equipment at a future time, is not assured unless specifically tested and certified in that combination, even under the assumption that the discs will not degrade at all.
The polycarbonate disc contains a spiral groove, called the pregroove because it is molded in before data are written to the disc; it guides the laser beam upon writing and reading information. The pregroove is molded into the top side of the polycarbonate disc, where the pits and lands would be molded if it were a pressed, nonrecordable Red Book CD. The bottom side, which faces the laser beam in the player or drive, is flat and smooth. The polycarbonate disc is coated on the pregroove side with a very thin layer of organic dye. Then, on top of the dye is coated a thin, reflecting layer of silver, a silver alloy, or gold. Finally, a protective coating of a photo-polymerizable lacquer is applied on top of the metal reflector and cured with UV light.
A blank CD-R is not "empty"; the pregroove has a wobble (the ATIP), which helps the writing laser to stay on track and to write the data to the disc at a constant rate. Maintaining a constant rate is essential to ensure the proper size and spacing of the pits and lands burned into the dye layer. As well as providing timing information, the ATIP (absolute time in pregroove) is also a data track containing information about the CD-R manufacturer, the dye used, and media information (disc length and so on). The pregroove is not destroyed when the data are written to the CD-R, a point which some copy protection schemes use to distinguish copies from an original CD.
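The "absolute time" carried in the pregroove is expressed in minutes, seconds, and frames (75 frames per second), the same MSF addressing used for CD sectors; the logical block addresses (LBA) used by drives offset this by the standard 150-sector (two-second) pregap. A small illustrative converter between the two addressing schemes:

```python
FRAMES_PER_SECOND = 75   # CD sectors ("frames") per second
MSF_OFFSET = 150         # 2-second pregap between MSF 00:00:00 and LBA 0

def msf_to_lba(minutes: int, seconds: int, frames: int) -> int:
    """Convert a minutes:seconds:frames address to a logical block address."""
    return (minutes * 60 + seconds) * FRAMES_PER_SECOND + frames - MSF_OFFSET

def lba_to_msf(lba: int) -> tuple[int, int, int]:
    """Convert a logical block address back to minutes:seconds:frames."""
    total = lba + MSF_OFFSET
    minutes, rem = divmod(total, 60 * FRAMES_PER_SECOND)
    seconds, frames = divmod(rem, FRAMES_PER_SECOND)
    return minutes, seconds, frames

assert msf_to_lba(0, 2, 0) == 0            # data area starts at MSF 00:02:00
assert lba_to_msf(msf_to_lba(79, 59, 74)) == (79, 59, 74)
```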
There are three basic formulations of dye used in CD-Rs: cyanine, which is green or blue-green in appearance; phthalocyanine, which is nearly colorless with a light aqua tint; and azo, which is dark blue.
There are many hybrid variations of the dye formulations, such as Formazan by Kodak (a hybrid of cyanine and phthalocyanine).
Many manufacturers have added additional coloring to disguise their unstable cyanine CD-Rs in the past, so the formulation of a disc cannot be determined based purely on its color. Similarly, a gold reflective layer does not guarantee the use of phthalocyanine dye. The quality of a disc is not dependent on the dye alone; it is also influenced by the sealing, the top layer, the reflective layer, and the polycarbonate. Simply choosing a disc based on its dye type may therefore be problematic. Furthermore, correct power calibration of the laser in the writer, as well as correct timing of the laser pulses, stable disc speed, and so on, is critical to not only the immediate readability but the longevity of the recorded disc, so for archiving it is important to have not only a high-quality disc but a high-quality writer. In fact, a high-quality writer may produce adequate results with medium-quality media, but high-quality media cannot compensate for a mediocre writer, and discs written by such a writer cannot achieve their maximum potential archival lifetime.
The nominal write times implied by a drive's speed rating (74 or 80 minutes at 1×, and proportionally less at higher speeds) only include the actual optical writing pass over the disc. For most disc recording operations, additional time is used for overhead processes, such as organizing the files and tracks, which adds to the theoretical minimum total time required to produce a disc. (An exception might be making a disc from a prepared ISO image, for which the overhead would likely be trivial.) At the lowest write speeds, this overhead takes so much less time than the actual disc writing pass that it may be negligible, but at higher write speeds the overhead becomes a larger proportion of the overall time taken to produce a finished disc and may add significantly to it.
Also, above 20× speed, drives use a Zoned-CLV or CAV strategy, where the advertised maximum speed is only reached near the outer rim of the disc. This is not taken into account by the above table. (If this were not done, the faster rotation that would be required at the inner tracks could cause the disc to fracture and/or could cause excessive vibration which would make accurate and successful writing impossible.)
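The spindle speed a pure-CLV drive would need at the innermost tracks makes the problem concrete. The numbers below are assumptions for illustration: a nominal 1× linear velocity of 1.3 m/s and a program area running from about 25 mm to 58 mm in radius.

```python
# Spindle speed for constant linear velocity: rpm = 60 * v / (2 * pi * r),
# where v = N * 1.3 m/s for an Nx write speed.
import math

V_1X = 1.3  # m/s, nominal 1x linear velocity

def clv_rpm(speed: int, radius_m: float) -> float:
    return 60 * speed * V_1X / (2 * math.pi * radius_m)

for speed in (20, 48):
    print(f"{speed}x CLV: {clv_rpm(speed, 0.025):7.0f} rpm at r = 25 mm, "
          f"{clv_rpm(speed, 0.058):6.0f} rpm at r = 58 mm")
# 48x CLV would demand roughly 24,000 rpm at the inner tracks, which is why
# drives cap rotation (CAV/Zoned-CLV) and reach full speed only near the rim.
```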
The blank disc has a pre-groove track onto which the data are written. The pre-groove track, which also contains timing information, ensures that the recorder follows the same spiral path as a conventional CD. A CD recorder writes data to a CD-R disc by pulsing its laser to heat areas of the organic dye layer. The writing process does not produce indentations (pits); instead, the heat permanently changes the optical properties of the dye, changing the reflectivity of those areas. Using a low laser power, so as not to further alter the dye, the disc is read back in the same way as a CD-ROM. However, the reflected light is modulated not by pits, but by the alternating regions of heated and unaltered dye. The change in the intensity of the reflected laser radiation is transformed into an electrical signal, from which the digital information is recovered ("decoded"). Once a section of a CD-R is written, it cannot be erased or rewritten, unlike a CD-RW. A CD-R can be recorded in multiple sessions. A CD recorder can write to a CD-R using several methods, including Disc At Once, in which the whole disc is written in one pass and then closed; Track At Once, in which tracks are written individually; Session At Once; and packet writing, in which data are written in small increments.
With careful examination, the written and unwritten areas can be distinguished by the naked eye. CD-Rs are written from the center outwards, so the written area appears as an inner band with slightly different shading.
Recordable CDs have a Power Calibration Area (PCA), used to calibrate the writing laser before and during recording. Discs contain two such areas: one close to the inner edge of the disc, for low-speed calibration, and another at the outer edge, for high-speed calibration. The calibration results are recorded in a Recording Management Area (RMA) that can hold up to 99 calibrations. The disc cannot be written once the RMA is full; however, the RMA may be emptied on CD-RW discs.
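As an illustration of this bookkeeping, here is a toy model of the RMA. The class and method names are hypothetical; only the 99-calibration capacity, the write lock-out when full, and the CD-RW erasability come from the description above.

```python
# Illustrative model only -- not a real drive or disc API.
class RecordingManagementArea:
    CAPACITY = 99  # maximum number of stored calibration results

    def __init__(self, rewritable: bool):
        self.rewritable = rewritable   # True for CD-RW, False for CD-R
        self.calibrations: list[float] = []

    def record_calibration(self, optimal_power_mw: float) -> None:
        if len(self.calibrations) >= self.CAPACITY:
            raise IOError("RMA full: the disc can no longer be written")
        self.calibrations.append(optimal_power_mw)

    def erase(self) -> None:
        if not self.rewritable:
            raise IOError("RMA cannot be emptied on write-once media")
        self.calibrations.clear()
```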
Real-life (not accelerated aging) tests have revealed that some CD-Rs degrade quickly even if stored normally. The quality of a CD-R disc has a large and direct influence on longevity: low-quality discs should not be expected to last very long. According to research conducted by J. Perdereau, CD-Rs have an expected average life expectancy of 10 years. Branding is not a reliable guide to quality, because many brands (major as well as no-name) do not manufacture their own discs but source them from various manufacturers of varying quality. For best results, the actual manufacturer and material components of each batch of discs should be verified.
Burned CD-Rs suffer from material degradation, just like most writable media. CD-R media have an internal layer of dye used to store data. In a CD-RW disc, the recording layer is made of an alloy of silver and other metals—indium, antimony, and tellurium. In CD-R media, the dye itself can degrade, causing data to become unreadable.
As well as degradation of the dye, failure of a CD-R can be caused by the reflective layer. While silver is less expensive and more widely used, it is more prone to oxidation, which results in a non-reflective surface. Gold, on the other hand, although more expensive and no longer widely used, is an inert material, so gold-based CD-Rs do not suffer from this problem. Manufacturers have estimated the longevity of gold-based CD-Rs to be as high as 100 years.
By measuring the rate of correctable data errors, the data integrity and/or manufacturing quality of CD-R media can be measured, allowing for a reliable prediction of future data losses caused by media degradation.
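One hedged way to turn such measurements into a prediction is to scan the same disc periodically and extrapolate the trend in correctable (C1) errors. The sketch below uses invented sample data, a plain least-squares line, and 220 errors/s as the ceiling (the Red Book's limit for the average block error rate); real degradation is rarely this linear, so treat the result as an order-of-magnitude estimate.

```python
# (age in years, average C1 errors/s) from periodic scans -- invented data.
samples = [(0, 5.0), (1, 12.0), (2, 31.0), (3, 60.0)]
THRESHOLD = 220.0  # Red Book ceiling for the average block error rate, per second

n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)

slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # errors/s per year
intercept = (sy - slope * sx) / n
years_left = (THRESHOLD - intercept) / slope
print(f"Linear extrapolation: threshold crossed after ~{years_left:.0f} years")
```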
If adhesive-backed paper labels are used, they should be specially made for CD-Rs. A balanced CD vibrates only slightly when rotated at high speed; bad or improperly made labels, or labels applied off-center, unbalance the disc and can cause it to vibrate as it spins, which causes read errors and even risks damaging the drive.
A professional alternative to adhesive labels is pre-printing CDs with a five-color silkscreen or offset press. Writing on the disc with a permanent marker pen is also common practice; however, solvents from such pens can attack the dye layer.
Since CD-Rs generally cannot be logically erased to any degree, the disposal of CD-Rs presents a possible security issue if they contain sensitive or private data; destroying the data requires physically destroying the disc or its data layer. Heating the disc in a microwave oven for 10–15 seconds effectively destroys the data layer by causing arcing in the metal reflective layer, but the same arcing may damage or excessively wear the oven. Many office paper shredders are also designed to shred CDs.
Some recent burners (Plextor, LiteOn) support erase operations on -R media by "overwriting" the stored data with strong laser power, although the erased area cannot then be overwritten with new data.
The polycarbonate material and possible gold or silver in the reflective layer would make CD-Rs highly recyclable. However, the polycarbonate is of very little value and the quantity of precious metals is so small that it is not profitable to recover them. Consequently, recyclers that accept CD-Rs typically do not offer compensation for donating or transporting the materials.
Cytosol

The cytosol, also known as cytoplasmic matrix or groundplasm, is one of the liquids found inside cells (intracellular fluid, ICF). It is separated into compartments by membranes. For example, the mitochondrial matrix separates the mitochondrion into many compartments.
In the eukaryotic cell, the cytosol is surrounded by the cell membrane and is part of the cytoplasm, which also comprises the mitochondria, plastids, and other organelles (but not their internal fluids and structures); the cell nucleus is separate. The cytosol is thus a liquid matrix around the organelles. In prokaryotes, most of the chemical reactions of metabolism take place in the cytosol, while a few take place in membranes or in the periplasmic space. In eukaryotes, while many metabolic pathways still occur in the cytosol, others take place within organelles.
The cytosol is a complex mixture of substances dissolved in water. Although water forms the large majority of the cytosol, its structure and properties within cells is not well understood. The concentrations of ions such as sodium and potassium in the cytosol are different to those in the extracellular fluid; these differences in ion levels are important in processes such as osmoregulation, cell signaling, and the generation of action potentials in excitable cells such as endocrine, nerve and muscle cells. The cytosol also contains large amounts of macromolecules, which can alter how molecules behave, through macromolecular crowding.
Although it was once thought to be a simple solution of molecules, the cytosol has multiple levels of organization. These include concentration gradients of small molecules such as calcium, large complexes of enzymes that act together and take part in metabolic pathways, and protein complexes such as proteasomes and carboxysomes that enclose and separate parts of the cytosol.
The term "cytosol" was first introduced in 1965 by H. A. Lardy, and initially referred to the liquid that was produced by breaking cells apart and pelleting all the insoluble components by ultracentrifugation. Such a soluble cell extract is not identical to the soluble part of the cell cytoplasm and is usually called a cytoplasmic fraction.
The term cytosol is now used to refer to the liquid phase of the cytoplasm in an intact cell. This excludes any part of the cytoplasm that is contained within organelles. Due to the possibility of confusion between the use of the word "cytosol" to refer to both extracts of cells and the soluble part of the cytoplasm in intact cells, the phrase "aqueous cytoplasm" has been used to describe the liquid contents of the cytoplasm of living cells.
Prior to this, other terms, including hyaloplasm, were used for the cell fluid, not always synonymously, as its nature was not well understood (see protoplasm).
The proportion of cell volume that is cytosol varies: for example, while this compartment forms the bulk of cell structure in bacteria, in plant cells the main compartment is the large central vacuole. The cytosol consists mostly of water, dissolved ions, small molecules, and large water-soluble molecules (such as proteins). The majority of these non-protein molecules have a molecular mass of less than 300 Da. This mixture of small molecules is extraordinarily complex, as the variety of molecules involved in metabolism (the metabolites) is immense. For example, up to 200,000 different small molecules might be made in plants, although not all of these will be present in the same species, or in a single cell. Estimates of the number of metabolites in single cells such as E. coli and baker's yeast predict that under 1,000 are made.
Most of the cytosol is water, which makes up about 70% of the total volume of a typical cell. The pH of the intracellular fluid is about 7.4, with human cytosolic pH ranging between 7.0 and 7.4; pH is usually higher when a cell is growing. The viscosity of cytoplasm is roughly the same as that of pure water, although diffusion of small molecules through this liquid is about fourfold slower than in pure water, due mostly to collisions with the large numbers of macromolecules in the cytosol. Studies in brine shrimp have examined how water affects cell functions; these showed that a 20% reduction in the amount of water in a cell inhibits metabolism, with metabolism decreasing progressively as the cell dries out, and all metabolic activity halting when the water level reaches 70% below normal.
Although water is vital for life, the structure of this water in the cytosol is not well understood, mostly because methods such as nuclear magnetic resonance spectroscopy only give information on the average structure of water, and cannot measure local variations at the microscopic scale. Even the structure of pure water is poorly understood, due to the ability of water to form structures such as water clusters through hydrogen bonds.
The classic view of water in cells is that about 5% of this water is strongly bound by solutes or macromolecules as water of solvation, while the majority has the same structure as pure water. This water of solvation is not active in osmosis and may have different solvent properties, so that some dissolved molecules are excluded while others become concentrated. However, others argue that the effects of the high concentrations of macromolecules in cells extend throughout the cytosol and that water in cells behaves very differently from water in dilute solutions. These ideas include the proposal that cells contain zones of low- and high-density water, which could have widespread effects on the structures and functions of the other parts of the cell. However, advanced nuclear magnetic resonance methods that directly measure the mobility of water in living cells contradict this idea, suggesting that 85% of cell water acts like pure water, while the remainder is less mobile and probably bound to macromolecules.
The concentrations of the other ions in cytosol are quite different from those in extracellular fluid, and the cytosol also contains much higher amounts of charged macromolecules, such as proteins and nucleic acids, than the fluid outside the cell.
In contrast to extracellular fluid, cytosol has a high concentration of potassium ions and a low concentration of sodium ions. This difference in ion concentrations is critical for osmoregulation: if the ion levels were the same inside a cell as outside, water would enter constantly by osmosis, since the levels of macromolecules inside cells are higher than outside. Instead, sodium ions are expelled and potassium ions taken up by the Na⁺/K⁺-ATPase; potassium ions then flow down their concentration gradient through potassium-selective ion channels, and this loss of positive charge creates a negative membrane potential. To balance this potential difference, negative chloride ions also exit the cell through selective chloride channels. The loss of sodium and chloride ions compensates for the osmotic effect of the higher concentration of organic molecules inside the cell.
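The size of the potential that a potassium gradient alone can generate is given by the Nernst equation. The concentrations below are typical textbook values for a mammalian cell, used purely as an illustration:

```latex
E_{\mathrm{K}} \;=\; \frac{RT}{zF}\,\ln\frac{[\mathrm{K}^{+}]_{\mathrm{out}}}{[\mathrm{K}^{+}]_{\mathrm{in}}}
\;\approx\; 61.5\,\mathrm{mV}\times\log_{10}\frac{5\ \mathrm{mM}}{140\ \mathrm{mM}}
\;\approx\; -89\,\mathrm{mV}
\qquad (T = 37\,^{\circ}\mathrm{C},\ z = +1)
```

Real resting potentials are less negative than this, since the membrane is also slightly permeable to sodium and other ions.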
Cells can deal with even larger osmotic changes by accumulating osmoprotectants such as betaines or trehalose in their cytosol. Some of these molecules can allow cells to survive being completely dried out and allow an organism to enter a state of suspended animation called cryptobiosis. In this state the cytosol and osmoprotectants become a glass-like solid that helps stabilize proteins and cell membranes from the damaging effects of desiccation.
The low concentration of calcium in the cytosol allows calcium ions to function as a second messenger in calcium signaling. Here, a signal such as a hormone or an action potential opens calcium channels so that calcium floods into the cytosol. This sudden increase in cytosolic calcium activates other signalling molecules, such as calmodulin and protein kinase C. Other ions such as chloride and potassium may also have signaling functions in the cytosol, but these are not well understood.
Protein molecules that do not bind to cell membranes or the cytoskeleton are dissolved in the cytosol. The amount of protein in cells is extremely high, approaching 200 mg/ml and occupying about 20–30% of the volume of the cytosol. However, measuring precisely how much protein is dissolved in the cytosol of intact cells is difficult, since some proteins appear to be weakly associated with membranes or organelles in whole cells and are released into solution upon cell lysis. Indeed, in experiments where the plasma membrane of cells was carefully disrupted using saponin, without damaging the other cell membranes, only about one quarter of cell protein was released. These cells were also able to synthesize proteins if given ATP and amino acids, implying that many of the enzymes in the cytosol are bound to the cytoskeleton. However, the idea that the majority of the proteins in cells are tightly bound in a network called the microtrabecular lattice is now seen as unlikely.
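The quoted volume fraction can be recovered from the concentration. Taking a typical protein partial specific volume of about 0.73 ml/g (a standard textbook figure, assumed here rather than taken from the source) gives a bare-protein fraction of roughly 15%; the hydration shell around each molecule raises the effectively occupied volume toward the quoted 20–30%:

```latex
\phi_{\text{protein}} \;=\; c\,\bar{v} \;\approx\; 0.200\ \mathrm{g\,ml^{-1}} \times 0.73\ \mathrm{ml\,g^{-1}} \;\approx\; 0.15
```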
In prokaryotes the cytosol contains the cell's genome, within a structure known as a nucleoid. This is an irregular mass of DNA and associated proteins that control the transcription and replication of the bacterial chromosome and plasmids. In eukaryotes the genome is held within the cell nucleus, which is separated from the cytosol by nuclear pores that block the free diffusion of any molecule larger than about 10 nanometres in diameter.
This high concentration of macromolecules in the cytosol causes an effect called macromolecular crowding: the effective concentration of other macromolecules is increased, since they have less volume to move in. This crowding effect can produce large changes in both the rates and the position of chemical equilibrium of reactions in the cytosol. It is particularly important in its ability to alter dissociation constants by favoring the association of macromolecules, such as when multiple proteins come together to form protein complexes, or when DNA-binding proteins bind to their targets in the genome.
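One standard way to formalize the effect on equilibria: in a crowded solution, an association A + B ⇌ AB is governed by activities rather than concentrations, so the apparent constant is scaled by activity coefficients. Because excluded volume penalizes two separate molecules more than the single, more compact complex, the product of the reactants' coefficients exceeds that of the complex, and crowding favors association. This is the generic thermodynamic form, not a result specific to the cytosol:

```latex
K_{\mathrm{app}} \;=\; K^{\circ}\,\frac{\gamma_{\mathrm{A}}\,\gamma_{\mathrm{B}}}{\gamma_{\mathrm{AB}}},
\qquad \gamma_{\mathrm{A}}\,\gamma_{\mathrm{B}} > \gamma_{\mathrm{AB}}
\;\Rightarrow\; K_{\mathrm{app}} > K^{\circ}
```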
Although the components of the cytosol are not separated into regions by cell membranes, these components do not always mix randomly and several levels of organization can localize specific molecules to defined sites within the cytosol.
Although small molecules diffuse rapidly in the cytosol, concentration gradients can still be produced within this compartment. A well-studied example is the "calcium spark", produced for a short period in the region around an open calcium channel. Sparks are about 2 micrometres in diameter and last for only a few milliseconds, although several sparks can merge to form larger gradients, called "calcium waves". Concentration gradients of other small molecules, such as oxygen and adenosine triphosphate, may be produced in cells around clusters of mitochondria, although these are less well understood.
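The micrometre scale of a spark is consistent with simple diffusion over its millisecond lifetime. The diffusion coefficient below is an order-of-magnitude assumption chosen for illustration, since the apparent value for calcium in cytosol depends strongly on buffering:

```latex
r \;\approx\; \sqrt{4Dt} \;\approx\; \sqrt{4 \times 100\ \mu\mathrm{m}^{2}\,\mathrm{s}^{-1} \times 5\times 10^{-3}\ \mathrm{s}} \;\approx\; 1.4\ \mu\mathrm{m}
```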
Proteins can associate to form protein complexes; these often contain a set of proteins with similar functions, such as enzymes that carry out several steps in the same metabolic pathway. This organization can allow substrate channeling, in which the product of one enzyme is passed directly to the next enzyme in a pathway without being released into solution. Channeling can make a pathway faster and more efficient than it would be if the enzymes were randomly distributed in the cytosol, and can also prevent the release of unstable reaction intermediates. Although a wide variety of metabolic pathways involve enzymes that are tightly bound to each other, others may involve more loosely associated complexes that are very difficult to study outside the cell. Consequently, the importance of these complexes for metabolism in general remains unclear.
Some protein complexes contain a large central cavity that is isolated from the remainder of the cytosol. One example of such an enclosed compartment is the proteasome. Here, a set of subunits form a hollow barrel containing proteases that degrade cytosolic proteins. Since these would be damaging if they mixed freely with the remainder of the cytosol, the barrel is capped by a set of regulatory proteins that recognize proteins with a signal directing them for degradation (a ubiquitin tag) and feed them into the proteolytic cavity.
Another large class of protein compartments are bacterial microcompartments, which are made of a protein shell that encapsulates various enzymes. These compartments are typically about 100–200 nanometres across and made of interlocking proteins. A well-understood example is the carboxysome, which contains enzymes involved in carbon fixation such as RuBisCO.
Non-membrane bound organelles can form as biomolecular condensates, which arise by clustering, oligomerisation, or polymerisation of macromolecules to drive colloidal phase separation of the cytoplasm or nucleus.
Although the cytoskeleton is not part of the cytosol, the presence of this network of filaments restricts the diffusion of large particles in the cell. For example, in several studies tracer particles larger than about 25 nanometres (about the size of a ribosome) were excluded from parts of the cytosol around the edges of the cell and next to the nucleus. These "excluding compartments" may contain a much denser meshwork of actin fibres than the remainder of the cytosol. These microdomains could influence the distribution of large structures such as ribosomes and organelles within the cytosol by excluding them from some areas and concentrating them in others.
The cytosol is the site of multiple cell processes. Examples of these processes include signal transduction from the cell membrane to sites within the cell, such as the cell nucleus, or organelles. This compartment is also the site of many of the processes of cytokinesis, after the breakdown of the nuclear membrane in mitosis. Another major function of cytosol is to transport metabolites from their site of production to where they are used. This is relatively simple for water-soluble molecules, such as amino acids, which can diffuse rapidly through the cytosol. However, hydrophobic molecules, such as fatty acids or sterols, can be transported through the cytosol by specific binding proteins, which shuttle these molecules between cell membranes. Molecules taken into the cell by endocytosis or on their way to be secreted can also be transported through the cytosol inside vesicles, which are small spheres of lipids that are moved along the cytoskeleton by motor proteins.
The cytosol is the site of most metabolism in prokaryotes, and a large proportion of the metabolism of eukaryotes. For instance, in mammals about half of the proteins in the cell are localized to the cytosol. The most complete data are available in yeast, where metabolic reconstructions indicate that the majority of both metabolic processes and metabolites occur in the cytosol. Major metabolic pathways that occur in the cytosol in animals are protein biosynthesis, the pentose phosphate pathway, glycolysis, and gluconeogenesis. The localization of pathways can be different in other organisms; for instance, fatty acid synthesis occurs in chloroplasts in plants and in apicoplasts in apicomplexa.
{
"paragraph_id": 0,
"text": "The cytosol, also known as cytoplasmic matrix or groundplasm, is one of the liquids found inside cells (intracellular fluid (ICF)). It is separated into compartments by membranes. For example, the mitochondrial matrix separates the mitochondrion into many compartments.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In the eukaryotic cell, the cytosol is surrounded by the cell membrane and is part of the cytoplasm, which also comprises the mitochondria, plastids, and other organelles (but not their internal fluids and structures); the cell nucleus is separate. The cytosol is thus a liquid matrix around the organelles. In prokaryotes, most of the chemical reactions of metabolism take place in the cytosol, while a few take place in membranes or in the periplasmic space. In eukaryotes, while many metabolic pathways still occur in the cytosol, others take place within organelles.",
"title": ""
},
{
"paragraph_id": 2,
"text": "The cytosol is a complex mixture of substances dissolved in water. Although water forms the large majority of the cytosol, its structure and properties within cells is not well understood. The concentrations of ions such as sodium and potassium in the cytosol are different to those in the extracellular fluid; these differences in ion levels are important in processes such as osmoregulation, cell signaling, and the generation of action potentials in excitable cells such as endocrine, nerve and muscle cells. The cytosol also contains large amounts of macromolecules, which can alter how molecules behave, through macromolecular crowding.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Although it was once thought to be a simple solution of molecules, the cytosol has multiple levels of organization. These include concentration gradients of small molecules such as calcium, large complexes of enzymes that act together and take part in metabolic pathways, and protein complexes such as proteasomes and carboxysomes that enclose and separate parts of the cytosol.",
"title": ""
},
{
"paragraph_id": 4,
"text": "The term \"cytosol\" was first introduced in 1965 by H. A. Lardy, and initially referred to the liquid that was produced by breaking cells apart and pelleting all the insoluble components by ultracentrifugation. Such a soluble cell extract is not identical to the soluble part of the cell cytoplasm and is usually called a cytoplasmic fraction.",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "The term cytosol is now used to refer to the liquid phase of the cytoplasm in an intact cell. This excludes any part of the cytoplasm that is contained within organelles. Due to the possibility of confusion between the use of the word \"cytosol\" to refer to both extracts of cells and the soluble part of the cytoplasm in intact cells, the phrase \"aqueous cytoplasm\" has been used to describe the liquid contents of the cytoplasm of living cells.",
"title": "Definition"
},
{
"paragraph_id": 6,
"text": "Prior to this, other terms, including hyaloplasm, were used for the cell fluid, not always synonymously, as its nature was not well understood (see protoplasm).",
"title": "Definition"
},
{
"paragraph_id": 7,
"text": "The proportion of cell volume that is cytosol varies: for example while this compartment forms the bulk of cell structure in bacteria, in plant cells the main compartment is the large central vacuole. The cytosol consists mostly of water, dissolved ions, small molecules, and large water-soluble molecules (such as proteins). The majority of these non-protein molecules have a molecular mass of less than 300 Da. This mixture of small molecules is extraordinarily complex, as the variety of molecules that are involved in metabolism (the metabolites) is immense. For example, up to 200,000 different small molecules might be made in plants, although not all these will be present in the same species, or in a single cell. Estimates of the number of metabolites in single cells such as E. coli and baker's yeast predict that under 1,000 are made.",
"title": "Properties and composition"
},
{
"paragraph_id": 8,
"text": "Most of the cytosol is water, which makes up about 70% of the total volume of a typical cell. The pH of the intracellular fluid is 7.4. while human cytosolic pH ranges between 7.0–7.4, and is usually higher if a cell is growing. The viscosity of cytoplasm is roughly the same as pure water, although diffusion of small molecules through this liquid is about fourfold slower than in pure water, due mostly to collisions with the large numbers of macromolecules in the cytosol. Studies in the brine shrimp have examined how water affects cell functions; these saw that a 20% reduction in the amount of water in a cell inhibits metabolism, with metabolism decreasing progressively as the cell dries out and all metabolic activity halting when the water level reaches 70% below normal.",
"title": "Properties and composition"
},
{
"paragraph_id": 9,
"text": "Although water is vital for life, the structure of this water in the cytosol is not well understood, mostly because methods such as nuclear magnetic resonance spectroscopy only give information on the average structure of water, and cannot measure local variations at the microscopic scale. Even the structure of pure water is poorly understood, due to the ability of water to form structures such as water clusters through hydrogen bonds.",
"title": "Properties and composition"
},
{
"paragraph_id": 10,
"text": "The classic view of water in cells is that about 5% of this water is strongly bound in by solutes or macromolecules as water of solvation, while the majority has the same structure as pure water. This water of solvation is not active in osmosis and may have different solvent properties, so that some dissolved molecules are excluded, while others become concentrated. However, others argue that the effects of the high concentrations of macromolecules in cells extend throughout the cytosol and that water in cells behaves very differently from the water in dilute solutions. These ideas include the proposal that cells contain zones of low and high-density water, which could have widespread effects on the structures and functions of the other parts of the cell. However, the use of advanced nuclear magnetic resonance methods to directly measure the mobility of water in living cells contradicts this idea, as it suggests that 85% of cell water acts like that pure water, while the remainder is less mobile and probably bound to macromolecules.",
"title": "Properties and composition"
},
{
"paragraph_id": 11,
"text": "The concentrations of the other ions in cytosol are quite different from those in extracellular fluid and the cytosol also contains much higher amounts of charged macromolecules such as proteins and nucleic acids than the outside of the cell structure.",
"title": "Properties and composition"
},
{
"paragraph_id": 12,
"text": "In contrast to extracellular fluid, cytosol has a high concentration of potassium ions and a low concentration of sodium ions. This difference in ion concentrations is critical for osmoregulation, since if the ion levels were the same inside a cell as outside, water would enter constantly by osmosis - since the levels of macromolecules inside cells are higher than their levels outside. Instead, sodium ions are expelled and potassium ions taken up by the Na⁺/K⁺-ATPase, potassium ions then flow down their concentration gradient through potassium-selection ion channels, this loss of positive charge creates a negative membrane potential. To balance this potential difference, negative chloride ions also exit the cell, through selective chloride channels. The loss of sodium and chloride ions compensates for the osmotic effect of the higher concentration of organic molecules inside the cell.",
"title": "Properties and composition"
},
{
"paragraph_id": 13,
"text": "Cells can deal with even larger osmotic changes by accumulating osmoprotectants such as betaines or trehalose in their cytosol. Some of these molecules can allow cells to survive being completely dried out and allow an organism to enter a state of suspended animation called cryptobiosis. In this state the cytosol and osmoprotectants become a glass-like solid that helps stabilize proteins and cell membranes from the damaging effects of desiccation.",
"title": "Properties and composition"
},
{
"paragraph_id": 14,
"text": "The low concentration of calcium in the cytosol allows calcium ions to function as a second messenger in calcium signaling. Here, a signal such as a hormone or an action potential opens calcium channel so that calcium floods into the cytosol. This sudden increase in cytosolic calcium activates other signalling molecules, such as calmodulin and protein kinase C. Other ions such as chloride and potassium may also have signaling functions in the cytosol, but these are not well understood.",
"title": "Properties and composition"
},
{
"paragraph_id": 15,
"text": "Protein molecules that do not bind to cell membranes or the cytoskeleton are dissolved in the cytosol. The amount of protein in cells is extremely high, and approaches 200 mg/ml, occupying about 20–30% of the volume of the cytosol. However, measuring precisely how much protein is dissolved in cytosol in intact cells is difficult, since some proteins appear to be weakly associated with membranes or organelles in whole cells and are released into solution upon cell lysis. Indeed, in experiments where the plasma membrane of cells were carefully disrupted using saponin, without damaging the other cell membranes, only about one quarter of cell protein was released. These cells were also able to synthesize proteins if given ATP and amino acids, implying that many of the enzymes in cytosol are bound to the cytoskeleton. However, the idea that the majority of the proteins in cells are tightly bound in a network called the microtrabecular lattice is now seen as unlikely.",
"title": "Properties and composition"
},
{
"paragraph_id": 16,
"text": "In prokaryotes the cytosol contains the cell's genome, within a structure known as a nucleoid. This is an irregular mass of DNA and associated proteins that control the transcription and replication of the bacterial chromosome and plasmids. In eukaryotes the genome is held within the cell nucleus, which is separated from the cytosol by nuclear pores that block the free diffusion of any molecule larger than about 10 nanometres in diameter.",
"title": "Properties and composition"
},
{
"paragraph_id": 17,
"text": "This high concentration of macromolecules in cytosol causes an effect called macromolecular crowding, which is when the effective concentration of other macromolecules is increased, since they have less volume to move in. This crowding effect can produce large changes in both the rates and the position of chemical equilibrium of reactions in the cytosol. It is particularly important in its ability to alter dissociation constants by favoring the association of macromolecules, such as when multiple proteins come together to form protein complexes, or when DNA-binding proteins bind to their targets in the genome.",
"title": "Properties and composition"
},
{
"paragraph_id": 18,
"text": "Although the components of the cytosol are not separated into regions by cell membranes, these components do not always mix randomly and several levels of organization can localize specific molecules to defined sites within the cytosol.",
"title": "Organization"
},
{
"paragraph_id": 19,
"text": "Although small molecules diffuse rapidly in the cytosol, concentration gradients can still be produced within this compartment. A well-studied example of these are the \"calcium sparks\" that are produced for a short period in the region around an open calcium channel. These are about 2 micrometres in diameter and last for only a few milliseconds, although several sparks can merge to form larger gradients, called \"calcium waves\". Concentration gradients of other small molecules, such as oxygen and adenosine triphosphate may be produced in cells around clusters of mitochondria, although these are less well understood.",
"title": "Organization"
},
{
"paragraph_id": 20,
"text": "Proteins can associate to form protein complexes, these often contain a set of proteins with similar functions, such as enzymes that carry out several steps in the same metabolic pathway. This organization can allow substrate channeling, which is when the product of one enzyme is passed directly to the next enzyme in a pathway without being released into solution. Channeling can make a pathway more rapid and efficient than it would be if the enzymes were randomly distributed in the cytosol, and can also prevent the release of unstable reaction intermediates. Although a wide variety of metabolic pathways involve enzymes that are tightly bound to each other, others may involve more loosely associated complexes that are very difficult to study outside the cell. Consequently, the importance of these complexes for metabolism in general remains unclear.",
"title": "Organization"
},
{
"paragraph_id": 21,
"text": "Some protein complexes contain a large central cavity that is isolated from the remainder of the cytosol. One example of such an enclosed compartment is the proteasome. Here, a set of subunits form a hollow barrel containing proteases that degrade cytosolic proteins. Since these would be damaging if they mixed freely with the remainder of the cytosol, the barrel is capped by a set of regulatory proteins that recognize proteins with a signal directing them for degradation (a ubiquitin tag) and feed them into the proteolytic cavity.",
"title": "Organization"
},
{
"paragraph_id": 22,
"text": "Another large class of protein compartments are bacterial microcompartments, which are made of a protein shell that encapsulates various enzymes. These compartments are typically about 100–200 nanometres across and made of interlocking proteins. A well-understood example is the carboxysome, which contains enzymes involved in carbon fixation such as RuBisCO.",
"title": "Organization"
},
{
"paragraph_id": 23,
"text": "Non-membrane bound organelles can form as biomolecular condensates, which arise by clustering, oligomerisation, or polymerisation of macromolecules to drive colloidal phase separation of the cytoplasm or nucleus.",
"title": "Organization"
},
{
"paragraph_id": 24,
"text": "Although the cytoskeleton is not part of the cytosol, the presence of this network of filaments restricts the diffusion of large particles in the cell. For example, in several studies tracer particles larger than about 25 nanometres (about the size of a ribosome) were excluded from parts of the cytosol around the edges of the cell and next to the nucleus. These \"excluding compartments\" may contain a much denser meshwork of actin fibres than the remainder of the cytosol. These microdomains could influence the distribution of large structures such as ribosomes and organelles within the cytosol by excluding them from some areas and concentrating them in others.",
"title": "Organization"
},
{
"paragraph_id": 25,
"text": "The cytosol is the site of multiple cell processes. Examples of these processes include signal transduction from the cell membrane to sites within the cell, such as the cell nucleus, or organelles. This compartment is also the site of many of the processes of cytokinesis, after the breakdown of the nuclear membrane in mitosis. Another major function of cytosol is to transport metabolites from their site of production to where they are used. This is relatively simple for water-soluble molecules, such as amino acids, which can diffuse rapidly through the cytosol. However, hydrophobic molecules, such as fatty acids or sterols, can be transported through the cytosol by specific binding proteins, which shuttle these molecules between cell membranes. Molecules taken into the cell by endocytosis or on their way to be secreted can also be transported through the cytosol inside vesicles, which are small spheres of lipids that are moved along the cytoskeleton by motor proteins.",
"title": "Function"
},
{
"paragraph_id": 26,
"text": "The cytosol is the site of most metabolism in prokaryotes, and a large proportion of the metabolism of eukaryotes. For instance, in mammals about half of the proteins in the cell are localized to the cytosol. The most complete data are available in yeast, where metabolic reconstructions indicate that the majority of both metabolic processes and metabolites occur in the cytosol. Major metabolic pathways that occur in the cytosol in animals are protein biosynthesis, the pentose phosphate pathway, glycolysis and gluconeogenesis. The localization of pathways can be different in other organisms, for instance fatty acid synthesis occurs in chloroplasts in plants and in apicoplasts in apicomplexa.",
"title": "Function"
},
{
"paragraph_id": 27,
"text": "",
"title": "Further reading"
}
] | The cytosol, also known as cytoplasmic matrix or groundplasm, is one of the liquids found inside cells. It is separated into compartments by membranes. For example, the mitochondrial matrix separates the mitochondrion into many compartments. In the eukaryotic cell, the cytosol is surrounded by the cell membrane and is part of the cytoplasm, which also comprises the mitochondria, plastids, and other organelles; the cell nucleus is separate. The cytosol is thus a liquid matrix around the organelles. In prokaryotes, most of the chemical reactions of metabolism take place in the cytosol, while a few take place in membranes or in the periplasmic space. In eukaryotes, while many metabolic pathways still occur in the cytosol, others take place within organelles. The cytosol is a complex mixture of substances dissolved in water. Although water forms the large majority of the cytosol, its structure and properties within cells are not well understood. The concentrations of ions such as sodium and potassium in the cytosol are different to those in the extracellular fluid; these differences in ion levels are important in processes such as osmoregulation, cell signaling, and the generation of action potentials in excitable cells such as endocrine, nerve and muscle cells. The cytosol also contains large amounts of macromolecules, which can alter how molecules behave, through macromolecular crowding. Although it was once thought to be a simple solution of molecules, the cytosol has multiple levels of organization. These include concentration gradients of small molecules such as calcium, large complexes of enzymes that act together and take part in metabolic pathways, and protein complexes such as proteasomes and carboxysomes that enclose and separate parts of the cytosol. | 2002-02-25T15:43:11Z | 2023-12-21T11:15:33Z | [
"Template:Authority control",
"Template:Reflist",
"Template:Cite journal",
"Template:Cite web",
"Template:Cellular structures",
"Template:Good article",
"Template:Short description",
"Template:Organelle diagram",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Cytosol |
6,782 | Compound | Compound may refer to: | [
{
"paragraph_id": 0,
"text": "Compound may refer to:",
"title": ""
}
] | Compound may refer to: | 2001-10-13T14:00:40Z | 2023-07-30T23:24:05Z | [
"Template:Wiktionary",
"Template:Tocright",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Compound |
6,784 | Citizenship | Citizenship is the enjoyment by a natural person of civil and political rights of a polity, as well as the incurring of duties, which are not afforded to non-citizens.
Though citizenship is often legally conflated with nationality in today's Anglo-Saxon world, international law does not usually use the term citizenship to refer to nationality, these two notions being conceptually different dimensions of collective membership.
Generally, citizenships have no expiration and allow persons to work, reside and vote in the polity, as well as identify with the polity, possibly acquiring a passport. However, through discriminatory laws, such as disfranchisement and outright apartheid, citizens have been made second-class citizens. Historically, populations of states were mostly subjects, while citizenship was a particular status which originated in the rights of urban populations, like the rights of the male public of cities and republics, particularly ancient city-states, giving rise to a civitas and the social class of the burgher or bourgeoisie. Since then, states have expanded the status of citizenship to most of their national people, while the extent of citizen rights remains contested.
Conceptually, citizenship and nationality are different dimensions of state membership. Citizenship is focused on the internal political life of the state, while nationality is the dimension of state membership in international law. Article 15 of the Universal Declaration of Human Rights states that everyone has the right to a nationality. As such, nationality in international law can be called and understood as citizenship, or more generally as being a subject of or belonging to a sovereign state, and not as ethnicity. This notwithstanding, around 10 million people are stateless.
In the contemporary era, the concept of full citizenship encompasses not only active political rights, but full civil rights and social rights.
A person can be recognized as a citizen on a number of bases.
Every citizen has obligations that are required by law and some responsibilities that benefit the community. Obeying the laws of a country and paying taxes are some of the obligations required of citizens by law. Voting and community service form part of the responsibilities of a citizen that benefit the community.
The Constitution of Ghana (1992), Article 41, obligates citizens to promote the prestige and good name of Ghana and respect the symbols of Ghana. Examples of national symbols include the Ghanaian flag, coat of arms, money, and state sword. These national symbols must be treated with respect and high esteem by citizens since they best represent Ghanaians.
Apart from responsibilities, citizens also have rights. Some of these rights are the right to pursue life, liberty and happiness, the right to worship, the right to run for elected office, and the right to express oneself.
Many thinkers, such as Giorgio Agamben in his book Homo Sacer (which extends the biopolitical framework of Foucault's History of Sexuality), point to the concept of citizenship beginning in the early city-states of ancient Greece, although others see it as primarily a modern phenomenon dating back only a few hundred years, or hold that, for humanity, the concept of citizenship arose with the first laws. Polis meant both the political assembly of the city-state as well as the entire society. The concept of citizenship has generally been identified as a western phenomenon. There is a general view that citizenship in ancient times was a simpler relation than modern forms of citizenship, although this view has come under scrutiny. The relation of citizenship has not been a fixed or static relation but has constantly changed within each society; according to one view, citizenship might "really have worked" only at select periods, such as when the Athenian politician Solon made reforms in the early Athenian state. Citizenship was also contingent on a variety of biopolitical assemblages, such as the bioethics of emerging Theo-Philosophical traditions. It was necessary to fit Aristotle's definition of the besouled (the animate) to obtain citizenship: neither the sacred olive tree nor spring would have any rights.
An essential part of the framework of Greco-Roman ethics is the figure of Homo Sacer or the bare life.
Historian Geoffrey Hosking in his 2005 Modern Scholar lecture course suggested that citizenship in ancient Greece arose from an appreciation for the importance of freedom. Hosking explained:
It can be argued that this growth of slavery was what made Greeks particularly conscious of the value of freedom. After all, any Greek farmer might fall into debt and therefore might become a slave, at almost any time ... When the Greeks fought together, they fought in order to avoid being enslaved by warfare, to avoid being defeated by those who might take them into slavery. And they also arranged their political institutions so as to remain free men.
Slavery permitted slave-owners to have substantial free time and enabled participation in public life. Polis citizenship was marked by exclusivity. Inequality of status was widespread; citizens (πολίτης politēs < πόλις 'city') had a higher status than non-citizens, such as women, slaves, and resident foreigners (metics). The first form of citizenship was based on the way people lived in the ancient Greek times, in small-scale organic communities of the polis. The obligations of citizenship were deeply connected to one's everyday life in the polis. These small-scale organic communities were generally seen as a new development in world history, in contrast to the established ancient civilizations of Egypt or Persia, or the hunter-gatherer bands elsewhere. From the viewpoint of the ancient Greeks, a person's public life could not be separated from their private life, and Greeks did not distinguish between the two worlds according to the modern western conception. To be truly human, one had to be an active citizen of the community, which Aristotle famously expressed: "To take no part in the running of the community's affairs is to be either a beast or a god!" This form of citizenship was based on the obligations of citizens towards the community, rather than rights given to the citizens of the community. This was not a problem because they all had a strong affinity with the polis; their own destiny and the destiny of the community were strongly linked. Also, citizens of the polis saw obligations to the community as an opportunity to be virtuous; it was a source of honor and respect. In Athens, citizens were both rulers and ruled; important political and judicial offices were rotated, and all citizens had the right to speak and vote in the political assembly.
In the Roman Empire, citizenship expanded from small-scale communities to the entirety of the empire. Romans realized that granting citizenship to people from all over the empire legitimized Roman rule over conquered areas. Roman citizenship was no longer a status of political agency, as it had been reduced to a judicial safeguard and the expression of rule and law. Rome carried forth Greek ideas of citizenship such as the principles of equality under the law, civic participation in government, and notions that "no one citizen should have too much power for too long", but Rome offered relatively generous terms to its captives, including chances for lesser forms of citizenship. If Greek citizenship was an "emancipation from the world of things", the Roman sense increasingly reflected the fact that citizens could act upon material things as well as other citizens, in the sense of buying or selling property, possessions, titles, goods. One historian explained:
The person was defined and represented through his actions upon things; in the course of time, the term property came to mean, first, the defining characteristic of a human or other being; second, the relation which a person had with a thing; and third, the thing defined as the possession of some person.
Roman citizenship reflected a struggle between the upper-class patrician interests and the lower-order working groups known as the plebeian class. A citizen came to be understood as a person "free to act by law, free to ask and expect the law's protection, a citizen of such and such a legal community, of such and such a legal standing in that community". Citizenship meant having rights to have possessions, immunities, expectations, which were "available in many kinds and degrees, available or unavailable to many kinds of person for many kinds of reason". The law itself was a kind of bond uniting people. Roman citizenship was more impersonal, universal, multiform, having different degrees and applications.
During the European Middle Ages, citizenship was usually associated with cities and towns (see medieval commune), and applied mainly to middle-class folk. Titles such as burgher, grand burgher (German Großbürger) and the bourgeoisie denoted political affiliation and identity in relation to a particular locality, as well as membership in a mercantile or trading class; thus, individuals of respectable means and socioeconomic status were interchangeable with citizens.
During this era, members of the nobility had a range of privileges above commoners (see aristocracy), though political upheavals and reforms, beginning most prominently with the French Revolution, abolished privileges and created an egalitarian concept of citizenship.
During the Renaissance, people transitioned from being subjects of a king or queen to being citizens of a city and later to a nation. Each city had its own law, courts, and independent administration. Being a citizen often meant being subject to the city's law in addition to having, in some instances, the power to help choose officials. City dwellers who had fought alongside nobles in battles to defend their cities were no longer content with having a subordinate social status but demanded a greater role in the form of citizenship. Membership in guilds was an indirect form of citizenship in that it helped their members succeed financially. The rise of citizenship was linked to the rise of republicanism, according to one account, since independent citizens meant that kings had less power. Citizenship became an idealized, almost abstract, concept, and did not signify a submissive relation with a lord or count, but rather indicated the bond between a person and the state in the rather abstract sense of having rights and duties.
The modern idea of citizenship still respects the idea of political participation, but it is usually done through "elaborate systems of political representation at a distance" such as representative democracy. Modern citizenship is much more passive; action is delegated to others; citizenship is often a constraint on acting, not an impetus to act. Nevertheless, citizens are usually aware of their obligations to authorities and are aware that these bonds often limit what they can do.
From 1790 until the mid-twentieth century, United States law used racial criteria to establish citizenship rights and regulate who was eligible to become a naturalized citizen. The Naturalization Act of 1790, the first law in U.S. history to establish rules for citizenship and naturalization, barred citizenship to all people who were not of European descent, stating that "any alien being a free white person, who shall have resided within the limits and under the jurisdiction of the United States for the term of two years, may be admitted to become a citizen thereof."
Under early U.S. laws, African Americans were not eligible for citizenship. In 1857, these laws were upheld in the US Supreme Court case Dred Scott v. Sandford, which ruled that "a free negro of the African race, whose ancestors were brought to this country and sold as slaves, is not a 'citizen' within the meaning of the Constitution of the United States," and that "the special rights and immunities guaranteed to citizens do not apply to them."
It was not until the abolition of slavery following the American Civil War that African Americans were granted citizenship rights. The 14th Amendment to the U.S. Constitution, ratified on July 9, 1868, stated that "all persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside." Two years later, the Naturalization Act of 1870 would extend the right to become a naturalized citizen to include "aliens of African nativity and to persons of African descent".
Despite the gains made by African Americans after the Civil War, Native Americans, Asians, and others not considered "free white persons" were still denied the ability to become citizens. The 1882 Chinese Exclusion Act explicitly denied naturalization rights to all people of Chinese origin, while subsequent acts passed by the US Congress, such as laws in 1906, 1917, and 1924, would include clauses that denied immigration and naturalization rights to people based on broadly defined racial categories. Supreme Court cases such as Ozawa v. United States (1922) and United States v. Bhagat Singh Thind (1923) would later clarify the meaning of the phrase "free white persons," ruling that ethnically Japanese, Indian, and other non-European people were not "white persons", and were therefore ineligible for naturalization under U.S. law.
Native Americans were not granted full US citizenship until the passage of the Indian Citizenship Act in 1924. However, even well into the 1960s, some state laws prevented Native Americans from exercising their full rights as citizens, such as the right to vote. In 1962, New Mexico became the last state to enfranchise Native Americans.
It was not until the passage of the Immigration and Nationality Act of 1952 that the racial and gender restrictions for naturalization were explicitly abolished. However, the act still contained restrictions regarding who was eligible for US citizenship and retained a national quota system which limited the number of visas given to immigrants based on their national origin, to be fixed "at a rate of one-sixth of one percent of each nationality's population in the United States in 1920". It was not until the passage of the Immigration and Nationality Act of 1965 that these immigration quota systems were drastically altered in favor of a less discriminatory system.
The 1918 constitution of revolutionary Russia granted citizenship to any foreigners who were living within the Russian Soviet Federative Socialist Republic, so long as they were "engaged in work and [belonged] to the working class." It recognized "the equal rights of all citizens, irrespective of their racial or national connections" and declared oppression of any minority group or race "to be contrary to the fundamental laws of the Republic." The 1918 constitution also established the right to vote and be elected to soviets for both men and women "irrespective of religion, nationality, domicile, etc. [...] who shall have completed their eighteenth year by the day of the election." The later constitutions of the USSR would grant universal Soviet citizenship to the citizens of all member republics in concord with the principles of non-discrimination laid out in the original 1918 constitution of Russia.
Nazism, the German variant of twentieth-century fascism, classified inhabitants of the country into three main hierarchical categories, each of which would have different rights in relation to the state: citizens, subjects, and aliens. The first category, citizens, were to possess full civic rights and responsibilities. Citizenship was conferred only on males of German (or so-called "Aryan") heritage who had completed military service, and could be revoked at any time by the state. The Reich Citizenship Law of 1935 established racial criteria for citizenship in the German Reich, and because of this law Jews and others who could not "prove German racial heritage" were stripped of their citizenship.
The second category, subjects, referred to all others who were born within the nation's boundaries who did not fit the racial criteria for citizenship. Subjects would have no voting rights, could not hold any position within the state, and possessed none of the other rights and civic responsibilities conferred on citizens. All women were to be conferred "subject" status upon birth, and could only obtain "citizen" status if they worked independently or if they married a German citizen (see women in Nazi Germany).
The final category, aliens, referred to those who were citizens of another state, who also had no rights.
In 2021, the German government passed legislation extending the restoration of citizenship provided for under Article 116 (2) of the Basic Law, which entitles individuals who had their German citizenship revoked "on political, racial, or religious grounds" between 30 January 1933 and 8 May 1945 to have it restored. This also entitles their descendants to German citizenship.
The primary principles of Israeli citizenship are jus sanguinis (citizenship by descent) for Jews and jus soli (citizenship by place of birth) for others.
Many theorists suggest that there are two opposing conceptions of citizenship: an economic one, and a political one. For further information, see History of citizenship. Citizenship status, under social contract theory, carries with it both rights and duties. In this sense, citizenship was described as "a bundle of rights -- primarily, political participation in the life of the community, the right to vote, and the right to receive certain protection from the community, as well as obligations." Citizenship is seen by most scholars as culture-specific, in the sense that the meaning of the term varies considerably from culture to culture, and over time. In China, for example, one academic article has argued that there is a cultural politics of citizenship which could be called "peopleship".
How citizenship is understood depends on the person making the determination. The relation of citizenship has never been fixed or static, but constantly changes within each society. While citizenship has varied considerably throughout history, and within societies over time, there are some common elements, though they vary considerably as well. As a bond, citizenship extends beyond basic kinship ties to unite people of different genetic backgrounds. It usually signifies membership in a political body. It is often based on, or was a result of, some form of military service or expectation of future service. It usually involves some form of political participation, but this can vary from token acts to active service in government.
Citizenship is a status in society. It is an ideal state as well. It generally describes a person with legal rights within a given political order. It almost always has an element of exclusion, meaning that some people are not citizens and that this distinction can sometimes be very important, or not important, depending on a particular society. Citizenship as a concept is generally hard to isolate intellectually and compare with related political notions since it relates to many other aspects of society such as the family, military service, the individual, freedom, religion, ideas of right and wrong, ethnicity, and patterns for how a person should behave in society. When there are many different groups within a nation, citizenship may be the only real bond that unites everybody as equals without discrimination—it is a "broad bond" linking "a person with the state" and gives people a universal identity as a legal member of a specific nation.
Modern citizenship has often been looked at as two competing underlying ideas:
Responsibilities of citizens
A responsibility is an action that individuals of a state or country are expected to take in the interest of the common good. These responsibilities can be categorised into personal and civic responsibilities.
Scholars suggest that the concept of citizenship contains many unresolved issues, sometimes called tensions, existing within the relation, that continue to reflect uncertainty about what citizenship is supposed to mean. Some unresolved issues regarding citizenship include questions about the proper balance between duties and rights. Another is the question of the proper balance between political citizenship and social citizenship. Some thinkers see benefits with people being absent from public affairs, since too much participation such as revolution can be destructive, yet too little participation such as total apathy can be problematic as well. Citizenship can be seen as a special elite status, and it can also be seen as a democratizing force and something that everybody has; the concept can include both senses. According to sociologist Arthur Stinchcombe, citizenship is based on the extent to which a person can control their own destiny within the group in the sense of being able to influence the government of the group. One last distinction within citizenship is the so-called consent-descent distinction, which addresses whether citizenship is a fundamental matter determined by a person choosing to belong to a particular nation (by their consent) or a matter of where a person was born (by their descent).
Some intergovernmental organizations have extended the concept and terminology associated with citizenship to the international level, where it is applied to the totality of the citizens of their constituent countries combined. Citizenship at this level is a secondary concept, with rights deriving from national citizenship.
The Maastricht Treaty introduced the concept of citizenship of the European Union. Article 17 (1) of the Treaty on European Union stated that:
Citizenship of the Union is hereby established. Every person holding the nationality of a Member State shall be a citizen of the Union. Citizenship of the Union shall be additional to and not replace national citizenship.
An agreement known as the amended EC Treaty established certain minimal rights for European Union citizens. Article 12 of the amended EC Treaty guaranteed a general right of non-discrimination within the scope of the Treaty. Article 18 provided a limited right to free movement and residence in Member States other than that of which the European Union citizen is a national. Articles 18-21 and 225 provide certain political rights.
Union citizens also have extensive rights to move in order to exercise economic activity in any of the Member States; these rights predate the introduction of Union citizenship.
Citizenship of the Mercosur is granted to eligible citizens of the Southern Common Market member states. It was approved in 2010 through the Citizenship Statute and was to be fully implemented by the member countries by 2021, when the program would be transformed into an international treaty incorporated into the national legal systems of the countries, under the concept of "Mercosur Citizen".
The concept of "Commonwealth Citizenship" has been in place ever since the establishment of the Commonwealth of Nations. As with the EU, one holds Commonwealth citizenship only by being a citizen of a Commonwealth member state. This form of citizenship offers certain privileges within some Commonwealth countries:
Although Ireland was excluded from the Commonwealth in 1949 because it declared itself a republic, Ireland is generally treated as if it were still a member. Legislation often specifically provides for equal treatment between Commonwealth countries and Ireland and refers to "Commonwealth countries and Ireland". Ireland's citizens are not classified as foreign nationals in the United Kingdom.
Canada departed from the principle of nationality being defined in terms of allegiance in 1921. In 1935, the Irish Free State was the first to introduce its own citizenship. However, Irish citizens were still treated as subjects of the Crown, and they are still not regarded as foreign, even though Ireland is not a member of the Commonwealth. The Canadian Citizenship Act of 1946 provided for a distinct Canadian citizenship, automatically conferred upon most individuals born in Canada, with some exceptions, and defined the conditions under which one could become a naturalized citizen. The concept of Commonwealth citizenship was introduced by the British Nationality Act 1948. Other dominions adopted this principle, such as New Zealand by way of the British Nationality and New Zealand Citizenship Act 1948.
Citizenship most usually relates to membership of the nation-state, but the term can also apply at the subnational level. Subnational entities may impose requirements, of residency or otherwise, which permit citizens to participate in the political life of that entity or to enjoy benefits provided by the government of that entity. In such cases, those eligible are also sometimes seen as "citizens" of the relevant state, province, or region. An example of this is how the fundamental basis of Swiss citizenship is citizenship of an individual commune, from which follows citizenship of a canton and of the Confederation. Another example is Åland, where the residents enjoy a special provincial citizenship within Finland, hembygdsrätt.
The United States has a federal system in which a person is a citizen of their specific state of residence, such as New York or California, as well as a citizen of the United States. State constitutions may grant certain rights above and beyond what is granted under the United States Constitution and may impose their own obligations, including the sovereign right of taxation and military service; each state maintains at least one military force subject to national militia transfer service (the state's national guard), and some states maintain a second military force not subject to nationalization.
"Active citizenship" is the philosophy that citizens should work towards the betterment of their community through economic participation, public, volunteer work, and other such efforts to improve life for all citizens. In this vein, citizenship education is taught in schools, as an academic subject in some countries. By the time children reach secondary education there is an emphasis on such unconventional subjects to be included in an academic curriculum. While the diagram on citizenship to the right is rather facile and depthless, it is simplified to explain the general model of citizenship that is taught to many secondary school pupils. The idea behind this model within education is to instill in young pupils that their actions (i.e. their vote) affect collective citizenship and thus in turn them.
It is taught in the Republic of Ireland as an exam subject for the Junior Certificate. It is known as Civic, Social and Political Education (CSPE). A new Leaving Certificate exam subject with the working title 'Politics & Society' is being developed by the National Council for Curriculum and Assessment (NCCA) and is expected to be introduced to the curriculum sometime after 2012.
Citizenship is offered as a General Certificate of Secondary Education (GCSE) course in many schools in the United Kingdom. As well as teaching knowledge about democracy, parliament, government, the justice system, human rights and the UK's relations with the wider world, students participate in active citizenship, often involving a social action or social enterprise in their local community.
The concept of citizenship is criticized by open borders advocates, who argue that it functions as a caste, feudal, or apartheid system in which people are assigned dramatically different opportunities based on the accident of birth. It is also criticized by some libertarians, especially anarcho-capitalists. In 1987, moral philosopher Joseph Carens argued that "citizenship in Western liberal democracies is the modern equivalent of feudal privilege—an inherited status that greatly enhances one's life chances. Like feudal birthright privileges, restrictive citizenship is hard to justify when one thinks about it closely".
.. | [
{
"paragraph_id": 0,
"text": "Citizenship is the enjoyment by a natural person of civil and political rights of a polity, as well as the incurring of duties, which are not afforded to non-citizens.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Though citizenship is often legally conflated with nationality in today's Anglo-Saxon world, international law does not usually use the term citizenship to refer to nationality, these two notions being conceptually different dimensions of collective membership.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Generally citizenships have no expiration and allow persons to work, reside and vote in the polity, as well as identify with the polity, possibly acquiring a passport. Though through discriminatory laws, like disfranchisement and outright apartheid citizens have been made second-class citizens. Historically, populations of states were mostly subjects, while citizenship was a particular status which originated in the rights of urban populations, like the rights of the male public of cities and republics, particularly ancient city-states, giving rise to a civitas and the social class of the burgher or bourgeoisie. Since then states have expanded the status of citizenship to most of their national people, while the extent of citizen rights remain contested.",
"title": ""
},
{
"paragraph_id": 3,
"text": "Conceptually citizenship and nationality are different dimensions of state membership. Citizenship is focused on the internal political life of the state and nationality is the dimension of state membership in international law. Article 15 of the Universal Declaration of Human Rights states that everyone has the right to nationality. As such nationality in international law can be called and understood as citizenship, or more generally as subject or belonging to a sovereign state, and not as ethnicity. This notwithstanding, around 10 million people are stateless.",
"title": "Definition"
},
{
"paragraph_id": 4,
"text": "In the contemporary era, the concept of full citizenship encompasses not only active political rights, but full civil rights and social rights.",
"title": "Definition"
},
{
"paragraph_id": 5,
"text": "A person can be recognized as a citizen on a number of bases.",
"title": "Determining factors"
},
{
"paragraph_id": 6,
"text": "Every citizen has obligations that are required by law and some responsibilities that benefit the community. Obeying the laws of a country and paying taxes are some of the obligations required of citizens by law. Voting and community services form part of responsibilities of a citizen that benefits the community.",
"title": "Determining factors"
},
{
"paragraph_id": 7,
"text": "The Constitution of Ghana (1992), Article 41, obligates citizens to promote the prestige and good name of Ghana and respect the symbols of Ghana. Examples of national symbols includes the Ghanaian flag, coat of arms, money, and state sword. These national symbols must be treated with respect and high esteem by citizens since they best represent Ghanaians.",
"title": "Determining factors"
},
{
"paragraph_id": 8,
"text": "Apart from responsibilities, citizens also have rights. Some of the rights are the right to pursue life, liberty and happiness, the right to worship, right to run for elected office and right to express oneself.",
"title": "Determining factors"
},
{
"paragraph_id": 9,
"text": "Many thinkers such as Giorgio Agamben in his work extending the biopolitical framework of Foucault's History of Sexuality in the book, Homo Sacer, point to the concept of citizenship beginning in the early city-states of ancient Greece, although others see it as primarily a modern phenomenon dating back only a few hundred years and, for humanity, that the concept of citizenship arose with the first laws. Polis meant both the political assembly of the city-state as well as the entire society. Citizenship concept has generally been identified as a western phenomenon. There is a general view that citizenship in ancient times was a simpler relation than modern forms of citizenship, although this view has come under scrutiny. The relation of citizenship has not been a fixed or static relation but constantly changed within each society, and that according to one view, citizenship might \"really have worked\" only at select periods during certain times, such as when the Athenian politician Solon made reforms in the early Athenian state. Citizenship was also contingent on a variety of biopolitical assemblages, such as the bioethics of emerging Theo-Philosophical traditions. It was necessary to fit Aristotle's definition of the besouled (the animate) to obtain citizenship: neither the sacred olive tree nor spring would have any rights.",
"title": "Determining factors"
},
{
"paragraph_id": 10,
"text": "An essential part of the framework of Greco-Roman ethics is the figure of Homo Sacer or the bare life.",
"title": "Determining factors"
},
{
"paragraph_id": 11,
"text": "Historian Geoffrey Hosking in his 2005 Modern Scholar lecture course suggested that citizenship in ancient Greece arose from an appreciation for the importance of freedom. Hosking explained:",
"title": "Determining factors"
},
{
"paragraph_id": 12,
"text": "It can be argued that this growth of slavery was what made Greeks particularly conscious of the value of freedom. After all, any Greek farmer might fall into debt and therefore might become a slave, at almost any time ... When the Greeks fought together, they fought in order to avoid being enslaved by warfare, to avoid being defeated by those who might take them into slavery. And they also arranged their political institutions so as to remain free men.",
"title": "Determining factors"
},
{
"paragraph_id": 13,
"text": "Slavery permitted slave-owners to have substantial free time and enabled participation in public life. Polis citizenship was marked by exclusivity. Inequality of status was widespread; citizens (πολίτης politēs < πόλις 'city') had a higher status than non-citizens, such as women, slaves, and resident foreigners (metics). The first form of citizenship was based on the way people lived in the ancient Greek times, in small-scale organic communities of the polis. The obligations of citizenship were deeply connected to one's everyday life in the polis. These small-scale organic communities were generally seen as a new development in world history, in contrast to the established ancient civilizations of Egypt or Persia, or the hunter-gatherer bands elsewhere. From the viewpoint of the ancient Greeks, a person's public life could not be separated from their private life, and Greeks did not distinguish between the two worlds according to the modern western conception. The obligations of citizenship were deeply connected with everyday life. To be truly human, one had to be an active citizen to the community, which Aristotle famously expressed: \"To take no part in the running of the community's affairs is to be either a beast or a god!\" This form of citizenship was based on the obligations of citizens towards the community, rather than rights given to the citizens of the community. This was not a problem because they all had a strong affinity with the polis; their own destiny and the destiny of the community were strongly linked. Also, citizens of the polis saw obligations to the community as an opportunity to be virtuous, it was a source of honor and respect. In Athens, citizens were both rulers and ruled, important political and judicial offices were rotated and all citizens had the right to speak and vote in the political assembly.",
"title": "Determining factors"
},
{
"paragraph_id": 14,
"text": "In the Roman Empire, citizenship expanded from small-scale communities to the entirety of the empire. Romans realized that granting citizenship to people from all over the empire legitimized Roman rule over conquered areas. Roman citizenship was no longer a status of political agency, as it had been reduced to a judicial safeguard and the expression of rule and law. Rome carried forth Greek ideas of citizenship such as the principles of equality under the law, civic participation in government, and notions that \"no one citizen should have too much power for too long\", but Rome offered relatively generous terms to its captives, including chances for lesser forms of citizenship. If Greek citizenship was an \"emancipation from the world of things\", the Roman sense increasingly reflected the fact that citizens could act upon material things as well as other citizens, in the sense of buying or selling property, possessions, titles, goods. One historian explained:",
"title": "Determining factors"
},
{
"paragraph_id": 15,
"text": "The person was defined and represented through his actions upon things; in the course of time, the term property came to mean, first, the defining characteristic of a human or other being; second, the relation which a person had with a thing; and third, the thing defined as the possession of some person.",
"title": "Determining factors"
},
{
"paragraph_id": 16,
"text": "Roman citizenship reflected a struggle between the upper-class patrician interests against the lower-order working groups known as the plebeian class. A citizen came to be understood as a person \"free to act by law, free to ask and expect the law's protection, a citizen of such and such a legal community, of such and such a legal standing in that community\". Citizenship meant having rights to have possessions, immunities, expectations, which were \"available in many kinds and degrees, available or unavailable to many kinds of person for many kinds of reason\". The law itself was a kind of bond uniting people. Roman citizenship was more impersonal, universal, multiform, having different degrees and applications.",
"title": "Determining factors"
},
{
"paragraph_id": 17,
"text": "During the European Middle Ages, citizenship was usually associated with cities and towns (see medieval commune), and applied mainly to middle-class folk. Titles such as burgher, grand burgher (German Großbürger) and the bourgeoisie denoted political affiliation and identity in relation to a particular locality, as well as membership in a mercantile or trading class; thus, individuals of respectable means and socioeconomic status were interchangeable with citizens.",
"title": "Determining factors"
},
{
"paragraph_id": 18,
"text": "During this era, members of the nobility had a range of privileges above commoners (see aristocracy), though political upheavals and reforms, beginning most prominently with the French Revolution, abolished privileges and created an egalitarian concept of citizenship.",
"title": "Determining factors"
},
{
"paragraph_id": 19,
"text": "During the Renaissance, people transitioned from being subjects of a king or queen to being citizens of a city and later to a nation. Each city had its own law, courts, and independent administration. And being a citizen often meant being subject to the city's law in addition to having power in some instances to help choose officials. City dwellers who had fought alongside nobles in battles to defend their cities were no longer content with having a subordinate social status but demanded a greater role in the form of citizenship. Membership in guilds was an indirect form of citizenship in that it helped their members succeed financially. The rise of citizenship was linked to the rise of republicanism, according to one account, since independent citizens meant that kings had less power. Citizenship became an idealized, almost abstract, concept, and did not signify a submissive relation with a lord or count, but rather indicated the bond between a person and the state in the rather abstract sense of having rights and duties.",
"title": "Determining factors"
},
{
"paragraph_id": 20,
"text": "The modern idea of citizenship still respects the idea of political participation, but it is usually done through \"elaborate systems of political representation at a distance\" such as representative democracy. Modern citizenship is much more passive; action is delegated to others; citizenship is often a constraint on acting, not an impetus to act. Nevertheless, citizens are usually aware of their obligations to authorities and are aware that these bonds often limit what they can do.",
"title": "Determining factors"
},
{
"paragraph_id": 21,
"text": "From 1790 until the mid-twentieth century, United States law used racial criteria to establish citizenship rights and regulate who was eligible to become a naturalized citizen. The Naturalization Act of 1790, the first law in U.S. history to establish rules for citizenship and naturalization, barred citizenship to all people who were not of European descent, stating that \"any alien being a free white person, who shall have resided within the limits and under the jurisdiction of the United States for the term of two years, maybe admitted to becoming a citizen thereof.\"",
"title": "Determining factors"
},
{
"paragraph_id": 22,
"text": "Under early U.S. laws, African Americans were not eligible for citizenship. In 1857, these laws were upheld in the US Supreme Court case Dred Scott v. Sandford, which ruled that \"a free negro of the African race, whose ancestors were brought to this country and sold as slaves, is not a 'citizen' within the meaning of the Constitution of the United States,\" and that \"the special rights and immunities guaranteed to citizens do not apply to them.\"",
"title": "Determining factors"
},
{
"paragraph_id": 23,
"text": "It was not until the abolition of slavery following the American Civil War that African Americans were granted citizenship rights. The 14th Amendment to the U.S. Constitution, ratified on July 9, 1868, stated that \"all persons born or naturalized in the United States, and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside.\" Two years later, the Naturalization Act of 1870 would extend the right to become a naturalized citizen to include \"aliens of African nativity and to persons of African descent\".",
"title": "Determining factors"
},
{
"paragraph_id": 24,
"text": "Despite the gains made by African Americans after the Civil War, Native Americans, Asians, and others not considered \"free white persons\" were still denied the ability to become citizens. The 1882 Chinese Exclusion Act explicitly denied naturalization rights to all people of Chinese origin, while subsequent acts passed by the US Congress, such as laws in 1906, 1917, and 1924, would include clauses that denied immigration and naturalization rights to people based on broadly defined racial categories. Supreme Court cases such as Ozawa v. the United States (1922) and U.S. v. Bhagat Singh Thind (1923), would later clarify the meaning of the phrase \"free white persons,\" ruling that ethnically Japanese, Indian, and other non-European people were not \"white persons\", and were therefore ineligible for naturalization under U.S. law.",
"title": "Determining factors"
},
{
"paragraph_id": 25,
"text": "Native Americans were not granted full US citizenship until the passage of the Indian Citizenship Act in 1924. However, even well into the 1960s, some state laws prevented Native Americans from exercising their full rights as citizens, such as the right to vote. In 1962, New Mexico became the last state to enfranchise Native Americans.",
"title": "Determining factors"
},
{
"paragraph_id": 26,
"text": "It was not until the passage of the Immigration and Nationality Act of 1952 that the racial and gender restrictions for naturalization were explicitly abolished. However, the act still contained restrictions regarding who was eligible for US citizenship and retained a national quota system which limited the number of visas given to immigrants based on their national origin, to be fixed \"at a rate of one-sixth of one percent of each nationality's population in the United States in 1920\". It was not until the passage of the Immigration and Nationality Act of 1965 that these immigration quota systems were drastically altered in favor of a less discriminatory system.",
"title": "Determining factors"
},
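As a worked illustration of the quota formula quoted above (the population figure here is hypothetical, chosen only to make the arithmetic concrete): "one-sixth of one percent" is $\frac{1}{6} \times \frac{1}{100} = \frac{1}{600}$, so a nationality counted at 600,000 residents in the 1920 census would have received an annual quota of

\[
600{,}000 \times \frac{1}{600} = 1{,}000 \ \text{visas per year.}
\]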
{
"paragraph_id": 27,
"text": "The 1918 constitution of revolutionary Russia granted citizenship to any foreigners who were living within the Russian Soviet Federative Socialist Republic, so long as they were \"engaged in work and [belonged] to the working class.\" It recognized \"the equal rights of all citizens, irrespective of their racial or national connections\" and declared oppression of any minority group or race \"to be contrary to the fundamental laws of the Republic.\" The 1918 constitution also established the right to vote and be elected to soviets for both men and women \"irrespective of religion, nationality, domicile, etc. [...] who shall have completed their eighteenth year by the day of the election.\" The later constitutions of the USSR would grant universal Soviet citizenship to the citizens of all member republics in concord with the principles of non-discrimination laid out in the original 1918 constitution of Russia.",
"title": "Determining factors"
},
{
"paragraph_id": 28,
"text": "Nazism, the German variant of twentieth-century fascism, classified inhabitants of the country into three main hierarchical categories, each of which would have different rights in relation to the state: citizens, subjects, and aliens. The first category, citizens, were to possess full civic rights and responsibilities. Citizenship was conferred only on males of German (or so-called \"Aryan\") heritage who had completed military service, and could be revoked at any time by the state. The Reich Citizenship Law of 1935 established racial criteria for citizenship in the German Reich, and because of this law Jews and others who could not \"prove German racial heritage\" were stripped of their citizenship.",
"title": "Determining factors"
},
{
"paragraph_id": 29,
"text": "The second category, subjects, referred to all others who were born within the nation's boundaries who did not fit the racial criteria for citizenship. Subjects would have no voting rights, could not hold any position within the state, and possessed none of the other rights and civic responsibilities conferred on citizens. All women were to be conferred \"subject\" status upon birth, and could only obtain \"citizen\" status if they worked independently or if they married a German citizen (see women in Nazi Germany).",
"title": "Determining factors"
},
{
"paragraph_id": 30,
"text": "The final category, aliens, referred to those who were citizens of another state, who also had no rights.",
"title": "Determining factors"
},
{
"paragraph_id": 31,
"text": "In 2021, the German government passed Article 116 (2) of the Basic Law, which entitles the restoration of citizenship to individuals who had their German citizenship revoked \"on political, racial, or religious grounds\" between 30 January 1933 and 8 May 1945. This also entitles their descendants to German citizenship.",
"title": "Determining factors"
},
{
"paragraph_id": 32,
"text": "The primary principles of Israeli citizenship is jus sanguinis (citizenship by descent) for Jews and jus soli (citizenship by place of birth) for others.",
"title": "Determining factors"
},
{
"paragraph_id": 33,
"text": "Many theorists suggest that there are two opposing conceptions of citizenship: an economic one, and a political one. For further information, see History of citizenship. Citizenship status, under social contract theory, carries with it both rights and duties. In this sense, citizenship was described as \"a bundle of rights -- primarily, political participation in the life of the community, the right to vote, and the right to receive certain protection from the community, as well as obligations.\" Citizenship is seen by most scholars as culture-specific, in the sense that the meaning of the term varies considerably from culture to culture, and over time. In China, for example, there is a cultural politics of citizenship which could be called \"peopleship\", argued by an academic article.",
"title": "Different senses"
},
{
"paragraph_id": 34,
"text": "How citizenship is understood depends on the person making the determination. The relation of citizenship has never been fixed or static, but constantly changes within each society. While citizenship has varied considerably throughout history, and within societies over time, there are some common elements but they vary considerably as well. As a bond, citizenship extends beyond basic kinship ties to unite people of different genetic backgrounds. It usually signifies membership in a political body. It is often based on or was a result of, some form of military service or expectation of future service. It usually involves some form of political participation, but this can vary from token acts to active service in government.",
"title": "Different senses"
},
{
"paragraph_id": 35,
"text": "Citizenship is a status in society. It is an ideal state as well. It generally describes a person with legal rights within a given political order. It almost always has an element of exclusion, meaning that some people are not citizens and that this distinction can sometimes be very important, or not important, depending on a particular society. Citizenship as a concept is generally hard to isolate intellectually and compare with related political notions since it relates to many other aspects of society such as the family, military service, the individual, freedom, religion, ideas of right, and wrong, ethnicity, and patterns for how a person should behave in society. When there are many different groups within a nation, citizenship may be the only real bond that unites everybody as equals without discrimination—it is a \"broad bond\" linking \"a person with the state\" and gives people a universal identity as a legal member of a specific nation.",
"title": "Different senses"
},
{
"paragraph_id": 36,
"text": "Modern citizenship has often been looked at as two competing underlying ideas:",
"title": "Different senses"
},
{
"paragraph_id": 37,
"text": "Responsibilities of citizens",
"title": "Different senses"
},
{
"paragraph_id": 38,
"text": "Responsibility is an action that individuals of a state or country must take note of in the interest of a common good. These responsibilities can be categorised into personal and civic responsibilities.",
"title": "Different senses"
},
{
"paragraph_id": 39,
"text": "Scholars suggest that the concept of citizenship contains many unresolved issues, sometimes called tensions, existing within the relation, that continue to reflect uncertainty about what citizenship is supposed to mean. Some unresolved issues regarding citizenship include questions about what is the proper balance between duties and rights. Another is a question about what is the proper balance between political citizenship versus social citizenship. Some thinkers see benefits with people being absent from public affairs, since too much participation such as revolution can be destructive, yet too little participation such as total apathy can be problematic as well. Citizenship can be seen as a special elite status, and it can also be seen as a democratizing force and something that everybody has; the concept can include both senses. According to sociologist Arthur Stinchcombe, citizenship is based on the extent that a person can control one's own destiny within the group in the sense of being able to influence the government of the group. One last distinction within citizenship is the so-called consent descent distinction, and this issue addresses whether citizenship is a fundamental matter determined by a person choosing to belong to a particular nation––by their consent––or is citizenship a matter of where a person was born––that is, by their descent.",
"title": "Different senses"
},
{
"paragraph_id": 40,
"text": "Some intergovernmental organizations have extended the concept and terminology associated with citizenship to the international level, where it is applied to the totality of the citizens of their constituent countries combined. Citizenship at this level is a secondary concept, with rights deriving from national citizenship.",
"title": "International"
},
{
"paragraph_id": 41,
"text": "The Maastricht Treaty introduced the concept of citizenship of the European Union. Article 17 (1) of the Treaty on European Union stated that:",
"title": "International"
},
{
"paragraph_id": 42,
"text": "Citizenship of the Union is hereby established. Every person holding the nationality of a Member State shall be a citizen of the Union. Citizenship of the Union shall be additional to and not replace national citizenship.",
"title": "International"
},
{
"paragraph_id": 43,
"text": "An agreement is known as the amended EC Treaty established certain minimal rights for European Union citizens. Article 12 of the amended EC Treaty guaranteed a general right of non-discrimination within the scope of the Treaty. Article 18 provided a limited right to free movement and residence in the Member States other than that of which the European Union citizen is a national. Articles 18-21 and 225 provide certain political rights.",
"title": "International"
},
{
"paragraph_id": 44,
"text": "Union citizens have also extensive rights to move in order to exercise economic activity in any of the Member States which predate the introduction of Union citizenship.",
"title": "International"
},
{
"paragraph_id": 45,
"text": "Citizenship of the Mercosur is granted to eligible citizens of the Southern Common Market member states. It was approved in 2010 through the Citizenship Statute and should be fully implemented by the member countries in 2021 when the program will be transformed in an international treaty incorporated into the national legal system of the countries, under the concept of \"Mercosur Citizen\".",
"title": "International"
},
{
"paragraph_id": 46,
"text": "The concept of \"Commonwealth Citizenship\" has been in place ever since the establishment of the Commonwealth of Nations. As with the EU, one holds Commonwealth citizenship only by being a citizen of a Commonwealth member state. This form of citizenship offers certain privileges within some Commonwealth countries:",
"title": "International"
},
{
"paragraph_id": 47,
"text": "Although Ireland was excluded from the Commonwealth in 1949 because it declared itself a republic, Ireland is generally treated as if it were still a member. Legislation often specifically provides for equal treatment between Commonwealth countries and Ireland and refers to \"Commonwealth countries and Ireland\". Ireland's citizens are not classified as foreign nationals in the United Kingdom.",
"title": "International"
},
{
"paragraph_id": 48,
"text": "Canada departed from the principle of nationality being defined in terms of allegiance in 1921. In 1935 the Irish Free State was the first to introduce its own citizenship. However, Irish citizens were still treated as subjects of the Crown, and they are still not regarded as foreign, even though Ireland is not a member of the Commonwealth. The Canadian Citizenship Act of 1946 provided for a distinct Canadian Citizenship, automatically conferred upon most individuals born in Canada, with some exceptions, and defined the conditions under which one could become a naturalized citizen. The concept of Commonwealth citizenship was introduced in 1948 in the British Nationality Act 1948. Other dominions adopted this principle such as New Zealand, by way of the British Nationality and New Zealand Citizenship Act 1948.",
"title": "International"
},
{
"paragraph_id": 49,
"text": "Citizenship most usually relates to membership of the nation-state, but the term can also apply at the subnational level. Subnational entities may impose requirements, of residency or otherwise, which permit citizens to participate in the political life of that entity or to enjoy benefits provided by the government of that entity. But in such cases, those eligible are also sometimes seen as \"citizens\" of the relevant state, province, or region. An example of this is how the fundamental basis of Swiss citizenship is a citizenship of an individual commune, from which follows citizenship of a canton and of the Confederation. Another example is Åland where the residents enjoy special provincial citizenship within Finland, hembygdsrätt.",
"title": "Subnational"
},
{
"paragraph_id": 50,
"text": "The United States has a federal system in which a person is a citizen of their specific state of residence, such as New York or California, as well as a citizen of the United States. State constitutions may grant certain rights above and beyond what is granted under the United States Constitution and may impose their own obligations including the sovereign right of taxation and military service; each state maintains at least one military force subject to national militia transfer service, the state's national guard, and some states maintain a second military force not subject to nationalization.",
"title": "Subnational"
},
{
"paragraph_id": 51,
"text": "\"Active citizenship\" is the philosophy that citizens should work towards the betterment of their community through economic participation, public, volunteer work, and other such efforts to improve life for all citizens. In this vein, citizenship education is taught in schools, as an academic subject in some countries. By the time children reach secondary education there is an emphasis on such unconventional subjects to be included in an academic curriculum. While the diagram on citizenship to the right is rather facile and depthless, it is simplified to explain the general model of citizenship that is taught to many secondary school pupils. The idea behind this model within education is to instill in young pupils that their actions (i.e. their vote) affect collective citizenship and thus in turn them.",
"title": "Education"
},
{
"paragraph_id": 52,
"text": "It is taught in the Republic of Ireland as an exam subject for the Junior Certificate. It is known as Civic, Social and Political Education (CSPE). A new Leaving Certificate exam subject with the working title 'Politics & Society' is being developed by the National Council for Curriculum and Assessment (NCCA) and is expected to be introduced to the curriculum sometime after 2012.",
"title": "Education"
},
{
"paragraph_id": 53,
"text": "Citizenship is offered as a General Certificate of Secondary Education (GCSE) course in many schools in the United Kingdom. As well as teaching knowledge about democracy, parliament, government, the justice system, human rights and the UK's relations with the wider world, students participate in active citizenship, often involving a social action or social enterprise in their local community.",
"title": "Education"
},
{
"paragraph_id": 54,
"text": "The concept of citizenship is criticized by open borders advocates, who argue that it functions as a caste, feudal, or apartheid system in which people are assigned dramatically different opportunities based on the accident of birth. It is also criticized by some libertarians, especially anarcho-capitalists. In 1987, moral philosopher Joseph Carens argued that \"citizenship in Western liberal democracies is the modern equivalent of feudal privilege—an inherited status that greatly enhances one's life chances. Like feudal birthright privileges, restrictive citizenship is hard to justify when one thinks about it closely\".",
"title": "Criticism"
}
] | Citizenship is the enjoyment by a natural person of civil and political rights of a polity, as well as the incurring of duties, which are not afforded to non-citizens. Though citizenship is often legally conflated with nationality in today's Anglo-Saxon world, international law does not usually use the term citizenship to refer to nationality, these two notions being conceptually different dimensions of collective membership. Generally, citizenship has no expiration and allows persons to work, reside and vote in the polity, as well as identify with the polity, possibly acquiring a passport. However, through discriminatory laws such as disfranchisement and outright apartheid, some citizens have been made second-class citizens. Historically, populations of states were mostly subjects, while citizenship was a particular status which originated in the rights of urban populations, like the rights of the male public of cities and republics, particularly ancient city-states, giving rise to a civitas and the social class of the burgher or bourgeoisie. Since then, states have expanded the status of citizenship to most of their national people, while the extent of citizen rights remains contested. | 2002-02-25T15:43:11Z | 2023-12-17T21:52:13Z | [
"Template:Citation needed",
"Template:Library resources box",
"Template:Wiktionary-inline",
"Template:Cite SEP",
"Template:Excerpt",
"Template:Rp",
"Template:Webarchive",
"Template:Cite encyclopedia",
"Template:Authority control",
"Template:Blockquote",
"Template:Cite web",
"Template:Cite book",
"Template:Cite journal",
"Template:Anchor",
"Template:Reflist",
"Template:More citations needed section",
"Template:Main",
"Template:Main article",
"Template:Further",
"Template:Short description",
"Template:See also",
"Template:Notelist",
"Template:Harvnb",
"Template:Redirect",
"Template:Citation",
"Template:Commons category-inline",
"Template:Social class",
"Template:Legal status of persons",
"Template:Sfn",
"Template:Doi",
"Template:Wikiquote-inline"
] | https://en.wikipedia.org/wiki/Citizenship |
6,787 | Chiapas | Chiapas (Spanish pronunciation: [ˈtʃjapas] ; Tzotzil and Tzeltal: Chyapas [ˈtʃʰjapʰas]), officially the Free and Sovereign State of Chiapas (Spanish: Estado Libre y Soberano de Chiapas), is one of the states that make up the 32 federal entities of Mexico. It comprises 124 municipalities as of September 2017 and its capital and largest city is Tuxtla Gutiérrez. Other important population centers in Chiapas include Ocosingo, Tapachula, San Cristóbal de las Casas, Comitán, and Arriaga. Chiapas is the southernmost state in Mexico; it borders the states of Oaxaca to the west, Veracruz to the northwest, and Tabasco to the north, as well as the Petén, Quiché, Huehuetenango, and San Marcos departments of Guatemala to the east and southeast. Chiapas has a significant coastline on the Pacific Ocean to the southwest.
In general, Chiapas has a humid, tropical climate. In the northern area bordering Tabasco, near Teapa, rainfall can average more than 3,000 mm (120 in) per year. In the past, natural vegetation in this region was lowland, tall perennial rainforest, but this vegetation has been almost completely cleared to allow agriculture and ranching. Rainfall decreases moving towards the Pacific Ocean, but it is still abundant enough to allow the farming of bananas and many other tropical crops near Tapachula. On the several parallel sierras or mountain ranges running along the center of Chiapas, the climate can be quite moderate and foggy, allowing the development of cloud forests like those of Reserva de la Biosfera El Triunfo, home to a handful of horned guans, resplendent quetzals, and azure-rumped tanagers.
Chiapas is home to the ancient Mayan ruins of Palenque, Yaxchilán, Bonampak, Chinkultic and Toniná. It is also home to one of the largest indigenous populations in the country, with ten federally recognized ethnicities.
The official name of the state is Chiapas, which is believed to have come from the ancient city of Chiapan, which in Náhuatl means "the place where the chia sage grows." After the Spanish arrived (1522), they established two cities called Chiapas de los Indios and Chiapas de los Españoles (1528), with the name of Provincia de Chiapas for the area around the cities. The first coat of arms of the region dates from 1535 as that of the Ciudad Real (San Cristóbal de las Casas). Chiapas painter Javier Vargas Ballinas designed the modern coat of arms.
Hunter-gatherers began to occupy the central valley of the state around 7000 BCE, but little is known about them. The oldest archaeological remains in the state are located at the Santa Elena Ranch in Ocozocoautla; the finds include tools and weapons made of stone and bone, as well as burials. In the pre-Classic period, from 1800 BCE to 300 CE, agricultural villages appeared all over the state, although hunter-gatherer groups would persist long after the era.
Recent excavations in the Soconusco region of the state indicate that the oldest civilization to appear in what is now modern Chiapas is that of the Mokaya, who were cultivating corn and living in houses as early as 1500 BCE, making them one of the oldest cultures in Mesoamerica. There is speculation that these were the forefathers of the Olmec, migrating across the Grijalva Valley and onto the coastal plain of the Gulf of Mexico to the north, which was Olmec territory. One of these people's ancient cities is now the archeological site of Chiapa de Corzo, where the oldest known calendar was found, on a piece of ceramic bearing a date of 36 BCE. This is three hundred years before the Mayans developed their calendar. The descendants of the Mokaya are the Mixe-Zoque.
During the pre-Classic era, it is known that most of Chiapas was not Olmec, but had close relations with the Olmecs, especially those of the Isthmus of Tehuantepec. Olmec-influenced sculpture can be found in Chiapas, and products from the state, including amber, magnetite, and ilmenite, were exported to Olmec lands. The Olmecs came to what is now the northwest of the state looking for amber, one of the main pieces of evidence for this being the Simojovel Ax.
Mayan civilization began in the pre-Classic period as well, but did not come into prominence until the Classic period (300–900 CE). The culture developed from agricultural villages during the pre-Classic period to city-building during the Classic, as social stratification became more complex. The Mayans built cities on the Yucatán Peninsula and west into Guatemala. In Chiapas, Mayan sites are concentrated along the state's borders with Tabasco and Guatemala, near Mayan sites in those entities. Most of this area belongs to the Lacandon Jungle.
Mayan civilization in the Lacandon area is marked by rising exploitation of rain forest resources, rigid social stratification, fervent local identity, and warfare against neighboring peoples. At its height, it had large cities, a writing system, and scientific knowledge, such as mathematics and astronomy. Cities were centered on large political and ceremonial structures elaborately decorated with murals and inscriptions. Among these cities are Palenque, Bonampak, Yaxchilan, Chinkultic, Toniná and Tenón. The Mayan civilization had extensive trade networks and large markets trading in goods such as animal skins, indigo, amber, vanilla and quetzal feathers. It is not known what ended the civilization, but theories range from overpopulation and natural disasters to disease and the loss of natural resources through overexploitation or climate change.
Nearly all Mayan cities collapsed around the same time, 900 CE. From then until 1500 CE, social organization of the region fragmented into much smaller units and social structure became much less complex. There was some influence from the rising powers of central Mexico, but two main indigenous groups emerged during this time, the Zoques and the various Mayan descendants. The Chiapans, for whom the state is named, migrated into the center of the state during this time and settled around Chiapa de Corzo, the old Mixe–Zoque stronghold. There is evidence that the Aztecs appeared in the center of the state around Chiapa de Corzo in the 15th century, but were unable to displace the native Chiapa tribe. However, they had enough influence that the name of this area and of the state would come from Nahuatl.
When the Spanish arrived in the 16th century, they found the indigenous peoples divided into Mayan and non-Mayan, with the latter dominated by the Zoques and Chiapanecas. The first contact between Spaniards and the people of Chiapas came in 1522, when Hernán Cortés sent tax collectors to the area after the Aztec Empire was subdued. The first military incursion was headed by Luis Marín, who arrived in 1523. After three years, Marín was able to subjugate a number of the local peoples, but met with fierce resistance from the Tzotzils in the highlands. The Spanish colonial government then sent a new expedition under Diego de Mazariegos. Mazariegos had more success than his predecessor, but many natives preferred to commit suicide rather than submit to the Spanish. One famous example of this is the Battle of Tepetchia, where many jumped to their deaths in the Sumidero Canyon.
Indigenous resistance was weakened by continual warfare with the Spaniards and by disease. By 1530, almost all of the indigenous peoples of the area had been subdued, with the exception of the Lacandons in the deep jungles, who actively resisted until 1695. The two main groups, the Tzotzils and Tzeltals of the central highlands, were subdued enough for the Spanish to establish their first city, today called San Cristóbal de las Casas, in 1528. It was one of two settlements, initially called Villa Real de Chiapa de los Españoles; the other was called Chiapa de los Indios.
Soon after, the encomienda system was introduced, which reduced most of the indigenous population to serfdom, and many even to slavery, as a form of tribute and a way of locking in a labor supply for tax payments. The conquistadors brought previously unknown diseases. This, as well as overwork on plantations, dramatically decreased the indigenous population. The Spanish also established missions, mostly under the Dominicans, with the Diocese of Chiapas established in 1538 by Pope Paul III. The Dominican evangelizers became early advocates of the indigenous people's plight, with Bartolomé de las Casas winning a battle with the passing of a law in 1542 for their protection. This order also worked to make sure that communities would keep their indigenous name with a saint's prefix, leading to names such as San Juan Chamula and San Lorenzo Zinacantán. Las Casas also advocated adapting the teaching of Christianity to indigenous language and culture. The encomienda system that had perpetrated much of the abuse of the indigenous peoples declined by the end of the 16th century, and was replaced by haciendas. However, the use and misuse of Indian labor remained a large part of Chiapas politics into modern times. Maltreatment and tribute payments created an undercurrent of resentment in the indigenous population that passed from generation to generation. One uprising against high tribute payments occurred in the Tzeltal communities in the Los Altos region in 1712. Soon, the Tzotzils and Ch'ols joined the Tzeltals in rebellion, but within a year the government was able to extinguish the rebellion.
As of 1778, Thomas Kitchin described Chiapas as "the metropolis of the original Mexicans," with a population of approximately 20,000, and consisting mainly of indigenous peoples. The Spanish introduced new crops such as sugar cane, wheat, barley and indigo as main economic staples alongside native ones such as corn, cotton, cacao and beans. Livestock such as cattle, horses and sheep were introduced as well. Regions would specialize in certain crops and animals depending on local conditions, and for many of these regions, communication and travel were difficult. Most Europeans and their descendants tended to concentrate in cities such as Ciudad Real, Comitán, Chiapa and Tuxtla. Intermixing of the races was prohibited by colonial law, but by the end of the 17th century there was a significant mestizo population. Added to this was a population of African slaves brought in by the Spanish in the middle of the 16th century due to the loss of the native workforce.
Initially, "Chiapas" referred to the first two cities established by the Spanish in what is now the center of the state and the area surrounding them. Two other regions were also established, the Soconusco and Tuxtla, all under the regional colonial government of Guatemala. Chiapas, Soconusco and Tuxla regions were united to the first time as an intendencia during the Bourbon Reforms in 1790 as an administrative region under the name of Chiapas. However, within this intendencia, the division between Chiapas and Soconusco regions would remain strong and have consequences at the end of the colonial period.
Throughout the colonial period, Chiapas was relatively isolated from the colonial authorities in Mexico City and the regional authorities in Guatemala. One reason for this was the rugged terrain. Another was that much of Chiapas was not attractive to the Spanish: it lacked mineral wealth, large areas of arable land, and easy access to markets. This isolation spared it from battles related to Independence. José María Morelos y Pavón did enter the city of Tonalá, but encountered no resistance. The only other insurgent activity was the publication of a newspaper called El Pararrayos by Matías de Córdova in San Cristóbal de las Casas.
Following the end of Spanish rule in New Spain, it was unclear what new political arrangements would emerge. The isolation of Chiapas from centers of power, along with the strong internal divisions in the intendencia, caused a political crisis after the royal government collapsed in Mexico City in 1821, ending the Mexican War of Independence. During this war, a group of influential Chiapas merchants and ranchers sought the establishment of the Free State of Chiapas. This group became known as the La Familia Chiapaneca. However, this alliance did not last, with the lowlands preferring inclusion among the new republics of Central America and the highlands preferring annexation to Mexico. In 1821, a number of cities in Chiapas, starting in Comitán, declared the state's separation from the Spanish empire. In 1823, Guatemala became part of the United Provinces of Central America, which united to form a federal republic that would last from 1823 to 1839. With the exception of the pro-Mexican Ciudad Real (San Cristóbal) and some others, many Chiapanecan towns and villages favored a Chiapas independent of Mexico, and some favored unification with Guatemala.
Elites in highland cities pushed for incorporation into Mexico. In 1822, then-Emperor Agustín de Iturbide decreed that Chiapas was part of Mexico. In 1823, the Junta General de Gobierno was held and Chiapas declared independence again. In July 1824, the Soconusco District of southwestern Chiapas split off from Chiapas, announcing that it would join the Central American Federation. In September of the same year, a referendum was held on whether the intendencia would join Central America or Mexico, with many of the elite endorsing union with Mexico. This referendum ended in favor of incorporation with Mexico (allegedly through manipulation by the elite in the highlands), but the Soconusco region maintained a neutral status until 1842, when Oaxacans under General Antonio López de Santa Anna occupied the area and declared it reincorporated into Mexico. Elites of the area would not accept this until 1844. Guatemala would not recognize Mexico's annexation of the Soconusco region until 1895, even though the border between Chiapas and Guatemala had been agreed upon in 1882. The State of Chiapas was officially declared in 1824, with its first constitution in 1826. Ciudad Real was renamed San Cristóbal de las Casas in 1828.
In the decades after the official end of the war, the provinces of Chiapas and Soconusco unified, with power concentrated in San Cristóbal de las Casas. The state's society evolved into three distinct spheres: indigenous peoples, mestizos from the farms and haciendas, and the Spanish colonial cities. Most of the political struggles were between the last two groups, especially over who would control the indigenous labor force. Economically, the state lost one of its main crops, indigo, to synthetic dyes. There was a small experiment with democracy in the form of "open city councils", but it was short-lived because voting was heavily rigged.
The Universidad Pontificia y Literaria de Chiapas was founded in 1826, with Mexico's second teacher's college founded in the state in 1828.
With the ouster of conservative Antonio López de Santa Anna, Mexican liberals came to power. The Reform War (1858–1861), fought between Liberals, who favored federalism, economic development, and reduced power for the Roman Catholic Church and the Mexican army, and Conservatives, who favored centralized autocratic government and the retention of elite privileges, did not lead to any military battles in the state. Despite that, it strongly affected Chiapas politics. In Chiapas, the Liberal-Conservative division had its own twist. Much of the division between the highland and lowland ruling families was over whom the Indians should work for and for how long, as the main shortage was of labor. These families split into Liberals in the lowlands, who wanted further reform, and Conservatives in the highlands, who still wanted to keep some of the traditional colonial and church privileges. For most of the early and mid 19th century, Conservatives held most of the power and were concentrated in the larger cities of San Cristóbal de las Casas, Chiapa (de Corzo), Tuxtla and Comitán. As Liberals gained the upper hand nationally in the mid-19th century, one Liberal politician, Ángel Albino Corzo, gained control of the state. Corzo became the primary exponent of Liberal ideas in the southeast of Mexico and defended the Palenque and Pichucalco areas from annexation by Tabasco. However, Corzo's rule would end in 1875, when he opposed the regime of Porfirio Díaz.
Liberal land reforms would have negative effects on the state's indigenous population, unlike in other areas of the country. Liberal governments expropriated lands that were previously held by the Spanish Crown and the Catholic Church in order to sell them into private hands. This was motivated not only by ideology, but also by the need to raise money. However, many of these lands had been held in a kind of "trust" with the local indigenous populations, who worked them. Liberal reforms took away this arrangement, and many of these lands fell into the hands of large landholders, who then made the local Indian population work three to five days a week just for the right to continue to cultivate the lands. This requirement caused many to leave and look for employment elsewhere. Most became "free" workers on other farms, but they were often paid only with food and basic necessities from the farm shop. If this was not enough, these workers became indebted to these same shops and were then unable to leave.
The opening up of these lands also allowed many whites and mestizos (often called Ladinos in Chiapas) to encroach on what had been exclusively indigenous communities in the state. These communities had had almost no contact with the Ladino world, except for a priest. The new Ladino landowners occupied their acquired lands as well as others, and newcomers such as shopkeepers opened up businesses in the center of Indian communities. In 1848, a group of Tzeltals plotted to kill the new mestizos in their midst, but the plan was discovered and punished by the removal of a large number of the community's male members. The changing social order had severe negative effects on the indigenous population, with alcoholism spreading and leading to more debt, as alcohol was expensive. The struggles between Conservatives and Liberals nationally disrupted commerce and confused power relations between Indian communities and Ladino authorities. It also resulted in some brief respites for Indians during times when the instability led to uncollected taxes.
One other effect of the Liberal land reforms was the start of coffee plantations, especially in the Soconusco region. One reason for this push was that Mexico was still working to strengthen its claim on the region against Guatemala's. The land reforms brought colonists from other areas of the country as well as foreigners from England, the United States and France. These foreign immigrants introduced coffee production to the area, as well as modern machinery and professional administration of coffee plantations. Eventually, this production of coffee would become the state's most important crop.
Although the Liberals had mostly triumphed in the state and the rest of the country by the 1860s, Conservatives still held considerable power in Chiapas. Liberal politicians sought to solidify their power among the indigenous groups by weakening the Roman Catholic Church. The more radical of these even allowed indigenous groups the religious freedom to return to a number of native rituals and beliefs, such as pilgrimages to natural shrines like mountains and waterfalls.
This culminated in the Chiapas "caste war", an uprising of Tzotzils beginning in 1868. The basis of the uprising was the establishment of the "three stones cult" in Tzajahemel. Agustina Gómez Checheb was a girl tending her father's sheep when three stones fell from the sky. Collecting them, she put them on her father's altar and soon claimed that the stones communicated with her. Word of this soon spread and the "talking stones" of Tzajahemel soon became a local indigenous pilgrimage site. The cult was taken over by one pilgrim, Pedro Díaz Cuzcat, who also claimed to be able to communicate with the stones and had knowledge of Catholic ritual, becoming a kind of priest. However, this challenged the traditional Catholic faith, and non-Indians began to denounce the cult. Stories about the cult include embellishments such as the crucifixion of a young Indian boy.
This led to the arrest of Checheb and Cuzcat in December 1868, which caused resentment among the Tzotzils. Although the Liberals had earlier supported the cult, Liberal landowners had also lost control of much of their Indian labor, and Liberal politicians were having a harder time collecting taxes from indigenous communities. An Indian army gathered at Zontehuitz, then attacked various villages and haciendas. By the following June, the city of San Cristóbal was surrounded by several thousand Indians, who offered to exchange several Ladino captives for their religious leaders and stones. Chiapas governor Domínguez came to San Cristóbal with about three hundred heavily armed men, who then attacked the Indian force, armed only with sticks and machetes. The indigenous force was quickly dispersed and routed, with government troops pursuing pockets of guerrilla resistance in the mountains until 1870. The event effectively returned control of the indigenous workforce to the highland elite.
Modernization during the Porfirio Díaz era at the end of the 19th century and beginning of the 20th was initially thwarted by regional bosses called caciques, bolstered by a wave of Spanish and mestizo farmers who migrated to the state and added to the elite group of wealthy landowning families. There was some technological progress, such as a highway from San Cristóbal to the Oaxaca border and the first telephone line in the 1880s, but Porfirian-era economic reforms would not begin until 1891 with Governor Emilio Rabasa. This governor took on the local and regional caciques and centralized power into the state capital, which he moved from San Cristóbal de las Casas to Tuxtla in 1892. He modernized public administration and transportation and promoted education. Rabasa also introduced the telegraph, limited public schooling, sanitation and road construction, including a route from San Cristóbal to Tuxtla then Oaxaca, which signaled the beginning of favoritism toward development in the central valley over the highlands. He also changed state policies to favor foreign investment and the consolidation of large landholdings for the production of cash crops such as henequen, rubber, guayule, cochineal and coffee. Agricultural production boomed, especially coffee, which induced the construction of port facilities in Tonalá. The economic expansion and investment in roads also increased access to tropical commodities such as hardwoods, rubber and chicle.
These still required cheap and steady labor, which was provided by the indigenous population. By the end of the 19th century, the four main indigenous groups, the Tzeltals, Tzotzils, Tojolabals and Ch'ols, were living in "reducciones" or reservations, isolated from one another. Conditions on the farms of the Porfirian era amounted to serfdom, as bad as if not worse than the conditions for other indigenous and mestizo populations that led to the Mexican Revolution. While this coming event would affect the state, Chiapas did not follow the uprisings in other areas that would end the Porfirian era.
Japanese immigration to Mexico began in 1897, when the first thirty-five migrants arrived in Chiapas to work on coffee farms, making Mexico the first Latin American country to receive organized Japanese immigration. Although this colony ultimately failed, there remains a small Japanese community in Acacoyagua, Chiapas.
In the early 20th century and into the Mexican Revolution, the production of coffee was particularly important but labor-intensive. This led to a practice called enganche (hook), in which recruiters would lure workers with advance pay and other incentives such as alcohol, and then trap them with debts for travel and other items to be worked off. This practice led to a kind of indentured servitude and to uprisings in areas of the state, although they never produced large rebel armies as in other parts of Mexico.
A small war broke out between Tuxtla Gutiérrez and San Cristóbal in 1911. San Cristóbal de las Casas, whose budget was so limited that it had to ally with San Juan Chamula, tried to regain the state's capital, but Tuxtla Gutiérrez, with only a small ragtag army, overwhelmingly defeated the Chamula-aided force from San Cristóbal. There were three years of peace after that until troops allied with the "First Chief" of the revolutionary Constitutionalist forces, Venustiano Carranza, entered in 1914, taking over the government with the aim of imposing the Ley de Obreros (Workers' Law) to address injustices against the state's mostly indigenous workers. Conservatives responded violently months later when they were certain the Carranza forces would take their lands, mostly by way of guerrilla actions headed by farm owners who called themselves the Mapaches. This action continued for six years, until President Carranza was assassinated in 1920 and revolutionary general Álvaro Obregón became president of Mexico. This allowed the Mapaches to gain political power in the state and effectively stop many of the social reforms occurring in other parts of Mexico.
The Mapaches continued to fight against socialists and communists in Mexico from 1920 to 1936 to maintain their control over the state. In general, elite landowners also allied with the nationally dominant party founded by Plutarco Elías Calles following the assassination of president-elect Obregón in 1928; that party was renamed the Institutional Revolutionary Party in 1946. Through that alliance, they were also able to block land reform. The Mapaches were first defeated in 1925, when an alliance of socialists and former Carranza loyalists had Carlos A. Vidal selected as governor, although he was assassinated two years later. The last of the Mapache resistance was overcome in the early 1930s by Governor Victorico Grajales, who pursued President Lázaro Cárdenas' social and economic policies, including persecution of the Catholic Church. These policies would have some success in redistributing lands and organizing indigenous workers, but the state would remain relatively isolated for the rest of the 20th century. The territory was reorganized into municipalities in 1916. The current state constitution was written in 1921.
There was political stability from the 1940s to the early 1970s; however, regionalism regained strength, with people thinking of themselves as being from their local city or municipality rather than from the state. This regionalism impeded the economy, as local authorities restricted the entry of outside goods. For this reason, the construction of highways and communications was pushed to help with economic development. Most of the work was done around Tuxtla Gutiérrez and Tapachula. This included the Sureste railroad connecting northern municipalities such as Pichucalco, Salto de Agua, Palenque, Catazajá and La Libertad. The Cristobal Colon highway linked Tuxtla to the Guatemalan border. Other highways included El Escopetazo to Pichucalco and a highway between San Cristóbal and Palenque with branches to Cuxtepeques and La Frailesca. This helped to integrate the state's economy, but it also permitted the political rise of communal land owners called ejidatarios.
In the mid-20th century, the state experienced a significant rise in population, which outstripped local resources, especially land in the highland areas. Since the 1930s, many indigenous people and mestizos have migrated from the highland areas into the Lacandon Jungle, with the populations of Altamirano, Las Margaritas, Ocosingo and Palenque rising from less than 11,000 in 1920 to over 376,000 in 2000. These migrants came to the jungle area to clear forest, grow crops and raise livestock, especially cattle. Economic development in general raised the output of the state, especially in agriculture, but it had the effect of deforesting many areas, especially the Lacandon. Added to this, there were still serf-like conditions for many workers and insufficient educational infrastructure. Population continued to increase faster than the economy could absorb. There were some attempts to resettle peasant farmers onto non-cultivated lands, but they were met with resistance. President Gustavo Díaz Ordaz awarded a land grant to the town of Venustiano Carranza in 1967, but that land was already being used by cattle-ranchers who refused to leave. The peasants tried to take over the land anyway, but when violence broke out, they were forcibly removed. In Chiapas, poor farmland and severe poverty afflicted the Mayan Indians, leading to unsuccessful nonviolent protests and eventually the armed struggle started by the Zapatista National Liberation Army in January 1994.
These events began to lead to political crises in the 1970s, with more frequent land invasions and takeovers of municipal halls. This was the beginning of a process that would lead to the emergence of the Zapatista movement in the 1990s. Another important factor to this movement would be the role of the Catholic Church from the 1960s to the 1980s. In 1960, Samuel Ruiz became the bishop of the Diocese of Chiapas, centered in San Cristóbal. He supported and worked with Marist priests and nuns following an ideology called liberation theology. In 1974, he organized a statewide "Indian Congress" with representatives from the Tzeltal, Tzotzil, Tojolabal and Ch'ol peoples from 327 communities as well as Marists and the Maoist People's Union. This congress was the first of its kind with the goal of uniting the indigenous peoples politically. These efforts were also supported by leftist organizations from outside Mexico, especially to form unions of ejido organizations. These unions would later form the base of the EZLN organization. One reason for the Church's efforts to reach out to the indigenous population was that starting in the 1970s, a shift began from traditional Catholic affiliation to Protestant, Evangelical and other Christian sects.
The 1980s saw a large wave of refugees coming into the state from Central America as a number of these countries, especially Guatemala, were in the midst of violent political turmoil. The Chiapas/Guatemala border had been relatively porous, with people traveling back and forth easily in the 19th and 20th centuries, much like the Mexico/U.S. border around the same time. This was in spite of tensions caused by Mexico's annexation of the Soconusco region in the 19th century. The border between Mexico and Guatemala had traditionally been poorly guarded, due to diplomatic considerations, a lack of resources and pressure from landowners who needed cheap labor sources.
The arrival of thousands of refugees from Central America stressed Mexico's relationship with Guatemala, at one point coming close to war, and politically destabilized Chiapas. Although Mexico is not a signatory to the UN Convention Relating to the Status of Refugees, international pressure forced the government to grant official protection to at least some of the refugees. Camps were established in Chiapas and other southern states, and mostly housed Mayan peoples. However, most Central American refugees from that time never received any official status, estimated by church and charity groups at about half a million from El Salvador alone. The Mexican government resisted direct international intervention in the camps, but eventually relented somewhat because of finances. By 1984, there were 92 camps with 46,000 refugees in Chiapas, concentrated in three areas, mostly near the Guatemalan border. To make matters worse, the Guatemalan army conducted raids into camps on Mexican territory, with significant casualties, terrifying the refugees and local populations. From within Mexico, refugees faced threats from local governments, which threatened to deport them, legally or not, and from local paramilitary groups funded by those worried about the political situation in Central America spilling over into the state. The official government response was to militarize the areas around the camps, which limited international access, and migration into Mexico from Central America was restricted. By 1990, it was estimated that there were over 200,000 Guatemalans and half a million from El Salvador, almost all peasant farmers and most under age twenty.
In the 1980s, the politicization of the indigenous and rural populations of the state that began in the 1960s and 1970s continued. In 1980, several ejidos (communal land organizations) joined to form the Union of Ejidal Unions and United Peasants of Chiapas, generally called the Union of Unions, or UU. It had a membership of 12,000 families from over 180 communities. By 1988, this organization joined with others to form the ARIC-Union of Unions (ARIC-UU) and took over much of the Lacandon Jungle portion of the state. Most of the members of these organizations were from Protestant and Evangelical sects as well as "Word of God" Catholics affiliated with the political movements of the Diocese of Chiapas. What they held in common was indigenous identity vis-à-vis the non-indigenous, using the old 19th century "caste war" word "Ladino" for them.
The adoption of liberal economic reforms by the Mexican federal government clashed with the leftist political ideals of these groups, notably as the reforms were believed to have begun to have negative economic effects on poor farmers, especially small-scale indigenous coffee-growers. Opposition would coalesce into the Zapatista movement in the 1990s. Although the Zapatista movement couched its demands and cast its role in response to contemporary issues, especially in its opposition to neoliberalism, it operates in the tradition of a long line of peasant and indigenous uprisings that have occurred in the state since the colonial era. This is reflected in its indigenous vs. mestizo character. However, the movement was an economic one as well. Although the area has extensive resources, much of the local population of the state, especially in rural areas, did not benefit from this bounty. In the 1990s, two thirds of the state's residents did not have sewage service, only a third had electricity, and half did not have potable water. Over half of the schools offered education only to the third grade, and most pupils dropped out by the end of first grade. Grievances, strongest in the San Cristóbal and Lacandon Jungle areas, were taken up by a small leftist guerrilla band led by a man known only as "Subcomandante Marcos."
This small band, called the Zapatista Army of National Liberation (Ejército Zapatista de Liberación Nacional, EZLN), came to the world's attention on January 1, 1994 (the day the NAFTA treaty went into effect), when EZLN forces occupied and took over the towns of San Cristóbal de las Casas, Las Margaritas, Altamirano, Ocosingo and three others. They read their proclamation of revolt to the world and then laid siege to a nearby military base, capturing weapons and releasing many prisoners from the jails. This action followed previous protests in the state in opposition to neoliberal economic policies.
Although it has been estimated as having no more than 300 armed guerrilla members, the EZLN paralyzed the Mexican government, which balked at the political risks of direct confrontation. The major reason for this was that the rebellion caught the attention of the national and world press, as Marcos made full use of the then-new Internet to get the group's message out, putting the spotlight on indigenous issues in Mexico in general. Furthermore, the opposition press in Mexico City, especially La Jornada, actively supported the rebels. These factors encouraged the rebellion to go national. Many blamed the unrest on infiltration of leftists among the large Central American refugee population in Chiapas, and the rebellion opened up splits in the countryside between those supporting and opposing the EZLN. Zapatista sympathizers have included mostly Protestants and Word of God Catholics, opposing those "traditionalist" Catholics who practiced a syncretic form of Catholicism and indigenous beliefs. This split had existed in Chiapas since the 1970s, with the latter group supported by the caciques and others in the traditional power-structure. Protestants and Word of God Catholics (allied directly with the bishopric in San Cristóbal) tended to oppose traditional power structures.
The Bishop of Chiapas, Samuel Ruiz, and the Diocese of Chiapas reacted by offering to mediate between the rebels and authorities. However, because of this diocese's activism since the 1960s, authorities accused the clergy of being involved with the rebels. There was some ambiguity about the relationship between Ruiz and Marcos, and it was a constant feature of news coverage, with many in official circles using it to discredit Ruiz. Eventually, the activities of the Zapatistas began to worry the Roman Catholic Church in general and to upstage the diocese's attempts to re-establish itself among Chiapan indigenous communities against Protestant evangelization. This would lead to a breach between the Church and the Zapatistas.
The Zapatista story remained in headlines for a number of years. One reason for this was the December 1997 massacre of forty-five unarmed Tzotzil peasants, mostly women and children, in the Zapatista-controlled village of Acteal in the Chenalhó municipality just north of San Cristóbal. This allowed many media outlets in Mexico to step up their criticisms of the government.
Despite this, the armed conflict was brief, mostly because the Zapatistas, unlike many other guerrilla movements, did not try to gain traditional political power. The movement focused more on trying to manipulate public opinion in order to obtain concessions from the government. This has linked the Zapatistas to other indigenous and identity-politics movements that arose in the late-20th century. The main concession that the group received was the San Andrés Accords (1996), also known as the Law on Indian Rights and Culture. The Accords appear to grant certain indigenous zones autonomy, but this conflicts with the Mexican constitution, so their legitimacy has been questioned. Zapatista declarations since the mid-1990s have called for a new constitution. As of 1999, the government had not found a solution to this problem. The revolt also pressed the government to institute anti-poverty programs such as "Progresa" (later called "Oportunidades") and the "Puebla-Panama Plan", aiming to increase trade between southern Mexico and Central America.
As of the first decade of the 2000s the Zapatista movement remained popular in many indigenous communities. The uprising gave indigenous peoples a more active role in the state's politics. However, it did not solve the economic issues that many peasant farmers face, especially the lack of land to cultivate. This problem has been at crisis proportions since the 1970s, and the government's reaction has been to encourage peasant farmers—mostly indigenous—to migrate into the sparsely populated Lacandon Jungle, a trend since earlier in the century.
From the 1970s on, some 100,000 people set up homes in this rainforest area, with many being recognized as ejidos, or communal land-holding organizations. These migrants included Tzeltals, Tojolabals, Ch'ols and mestizos, mostly farming corn and beans and raising livestock. However, the government changed policies in the late 1980s with the establishment of the Montes Azules Biosphere Reserve, as much of the Lacandon Jungle had been destroyed or severely damaged. While armed resistance has wound down, the Zapatistas have remained a strong political force, especially around San Cristóbal and the Lacandon Jungle, its traditional bases. Since the Accords, they have shifted focus in gaining autonomy for the communities they control.
Since the 1994 uprising, migration into the Lacandon Jungle has significantly increased, involving illegal settlements and cutting in the protected biosphere reserve. The Zapatistas support these actions as part of indigenous rights, but that has put them in conflict with international environmental groups and with the indigenous inhabitants of the rainforest area, the Lacandons. Environmental groups state that the settlements pose grave risks to what remains of the Lacandon, while the Zapatistas accuse them of being fronts for the government, which wants to open the rainforest up to multinational corporations. Added to this is the possibility that significant oil and gas deposits exist under this area.
The Zapatista movement has had some successes. The agricultural sector of the economy now favors ejidos and other commonly-owned land. There have been some other gains economically as well. In the last decades of the 20th century, Chiapas's traditional agricultural economy has diversified somewhat with the construction of more roads and better infrastructure by the federal and state governments. Tourism has become important in some areas of the state, especially in San Cristóbal de las Casas and Palenque. Its economy is important to Mexico as a whole as well, producing coffee, corn, cacao, tobacco, sugar, fruit, vegetables and honey for export. It is also a key state for the nation's petrochemical and hydroelectric industries. A significant percentage of PEMEX's drilling and refining takes place in Chiapas and Tabasco, and Chiapas produces fifty-five percent of Mexico's hydroelectric energy.
However, Chiapas remains one of the poorest states in Mexico. Ninety-four of its 111 municipalities have a large percentage of the population living in poverty. In areas such as Ocosingo, Altamirano and Las Margaritas, the towns where the Zapatistas first came into prominence in 1994, 48% of the adults were illiterate. Chiapas is still considered isolated and distant from the rest of Mexico, both culturally and geographically. It has significantly underdeveloped infrastructure compared to the rest of the country, and its significant indigenous population with isolationist tendencies keeps the state culturally distinct. Cultural stratification, neglect and a lack of investment by the Mexican federal government have exacerbated this problem.
In early November 2023, a communiqué signed by rebel Subcomandante Moisés announced the dissolution of the EZLN's Rebel Zapatista Autonomous Municipalities, citing cartel violence generated by the Sinaloa Cartel and the Jalisco New Generation Cartel, as well as violent clashes along the Guatemalan border due to the increasing violence there. Caracoles will remain open to locals but closed to outsiders, and the previous MAREZ system will be reorganized into a new autonomous system.
Chiapas is located in the southeast of Mexico, bordering the states of Tabasco, Veracruz and Oaxaca, with the Pacific Ocean to the south and Guatemala to the east. It has a territory of 74,415 km², making it the eighth largest state in Mexico. The state consists of 118 municipalities organized into nine political regions called Center, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa. There are 18 cities, twelve towns (villas) and 111 pueblos (villages). Major cities include Tuxtla Gutiérrez, San Cristóbal de las Casas, Tapachula, Palenque, Comitán, and Chiapa de Corzo.
The state has a complex geography with seven distinct regions according to the Mullerried classification system. These include the Pacific Coast Plains, the Sierra Madre de Chiapas, the Central Depression, the Central Highlands, the Eastern Mountains, the Northern Mountains and the Gulf Coast Plains. The Pacific Coast Plains is a strip of land parallel to the ocean. It is composed mostly of sediment from the mountains that border it on the northern side. It is uniformly flat, and stretches from the Bernal Mountain south to Tonalá. It has deep salty soils due to its proximity to the sea. It has mostly deciduous rainforest although most has been converted to pasture for cattle and fields for crops. It has numerous estuaries with mangroves and other aquatic vegetation.
The Sierra Madre de Chiapas runs parallel to the Pacific coastline of the state, northwest to southeast, as a continuation of the Sierra Madre del Sur. This area has the highest altitudes in Chiapas, including the Tacaná Volcano, which rises 4,093 m (13,428 ft) above sea level. Most of these mountains are volcanic in origin, although the nucleus is metamorphic rock. It has a wide range of climates but little arable land. It is mostly covered in middle altitude rainforest, high altitude rainforest, and forests of oaks and pines. The mountains partially block rain clouds from the Pacific, a process known as orographic lift, which creates a particularly rich coastal region called the Soconusco. The main commercial center of the sierra is the town of Motozintla, also near the Guatemalan border.
The Central Depression is in the center of the state. It is an extensive semi-flat area bordered by the Sierra Madre de Chiapas, the Central Highlands and the Northern Mountains. Within the depression there are a number of distinct valleys. The climate here can be very hot and humid in the summer, especially due to the large volume of rain received in July and August. The original vegetation was lowland deciduous forest with some rainforest of middle altitudes and some oaks above 1,500 m (4,900 ft) above sea level.
The Central Highlands, also referred to as Los Altos, are mountains oriented from northwest to southeast with altitudes ranging from 1,200 to 1,600 m (3,900 to 5,200 ft) above sea level. The western highlands are displaced faults, while the eastern highlands are mainly folds of sedimentary formations – mainly limestone, shale, and sandstone. These mountains, along with the Sierra Madre de Chiapas, become the Cuchumatanes where they extend over the border into Guatemala. Its topography is mountainous with many narrow valleys and karst formations called uvalas or poljés, depending on the size. Most of the rock is limestone, allowing for a number of formations such as caves and sinkholes. There are also some isolated pockets of volcanic rock, with the tallest peaks being the Tzontehuitz and Huitepec volcanos. There are no significant surface water systems, as they are almost all underground. The original vegetation was forest of oak and pine, but these have been heavily damaged. The highlands climate in the Koeppen modified classification system for Mexico is humid temperate C(m) and subhumid temperate C(w2)(w). This climate exhibits a summer rainy season and a dry winter, with possibilities of frost from December to March. The Central Highlands have been the population center of Chiapas since the Conquest. European epidemics were hindered by the tierra fría climate, allowing the indigenous peoples in the highlands to retain their large numbers.
The Eastern Mountains (Montañas del Oriente) are in the east of the state, formed by various parallel mountain chains mostly made of limestone and sandstone. Their altitude varies from 500 to 1,500 m (1,600 to 4,900 ft). This area receives moisture from the Gulf of Mexico with abundant rainfall and exuberant vegetation, which creates the Lacandon Jungle, one of the most important rainforests in Mexico. The Northern Mountains (Montañas del Norte) are in the north of the state. They separate the flatlands of the Gulf Coast Plains from the Central Depression. Their rock is mostly limestone. These mountains also receive large amounts of rainfall, with moisture from the Gulf of Mexico giving the region a mostly hot and humid climate with rains year round. At the highest elevations, around 1,800 m (5,900 ft), temperatures are somewhat cooler and the area does experience a winter. The terrain is rugged, with small valleys whose natural vegetation is high altitude rainforest.
The Gulf Coast Plains (Llanura Costera del Golfo) stretch into Chiapas from the state of Tabasco, which gives them the alternate name of the Tabasqueña Plains. These plains are found only in the extreme north of the state. The terrain is flat and prone to flooding during the rainy season, as it was built up from sediments deposited by rivers and streams heading to the Gulf.
The Lacandon Jungle is situated in northeastern Chiapas, centered on a series of canyon-like valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. The ecosystem covers approximately 1.9 million ha (4.7 million acres), extending from Chiapas into northern Guatemala, the southern Yucatán Peninsula and Belize. This area contains as much as 25% of Mexico's total species diversity, most of which has not been researched. It has a predominantly hot and humid climate (Am w″ i g), with most rain falling from summer to part of fall and an average of between 2,300 and 2,600 mm per year. There is a short dry season from March to May. The predominant wild vegetation is perennial high rainforest. The Lacandon comprises a biosphere reserve (Montes Azules); four natural protected areas (Bonampak, Yaxchilan, Chan Kin, and Lacantum); and the communal reserve (La Cojolita), which functions as a biological corridor with the Petén area of Guatemala. Flowing within the rainforest is the Usumacinta River, considered to be one of the largest rivers in Mexico and the seventh largest in the world based on volume of water.
During the 20th century, the Lacandon has seen a dramatic increase in population and, along with it, severe deforestation. The population of the municipalities in this area (Altamirano, Las Margaritas, Ocosingo and Palenque) rose from 11,000 in 1920 to over 376,000 in 2000. Migrants include Ch'ol, Tzeltal, Tzotzil and Tojolabal indigenous peoples, along with mestizos, Guatemalan refugees and others. Most of these migrants are peasant farmers who cut forest to plant crops. However, the soil of this area cannot support annual crop farming for more than three or four harvests. The increase in population and the need to move on to new lands has pitted migrants against each other, the native Lacandon people, and the various ecological reserves. It is estimated that only ten percent of the original Lacandon rainforest in Mexico remains, with the rest strip-mined, logged or farmed. The forest once stretched over a large part of eastern Chiapas, but all that remains is along the northern edge of the Guatemalan border. Of this remaining portion, Mexico is losing over five percent each year.
The best preserved portion of the Lacandon is within the Montes Azules Biosphere Reserve. It is centered on what was a commercial logging grant by the Porfirio Díaz government, which the government later nationalized. However, this nationalization and conversion into a reserve has made it one of the most contested lands in Chiapas, with already existing ejidos and other settlements inside the park competing with new arrivals squatting on the land.
The Soconusco region encompasses a coastal plain and a mountain range with elevations of up to 2,000 m (6,600 ft) above sea level paralleling the Pacific Coast. The highest peak in Chiapas is the Tacaná Volcano at 4,060 m (13,320 ft) above sea level. In accordance with an 1882 treaty, the dividing line between Mexico and Guatemala goes right over the summit of this volcano. The climate is tropical, with a number of rivers and evergreen forests in the mountains. This is Chiapas's major coffee-producing area, as it has the best soils and climate for coffee. Before the arrival of the Spanish, this area was the principal source of cocoa seeds in the Aztec empire, which used them as currency, and of the highly prized quetzal feathers used by the nobility. It became the first area in the state to produce coffee, introduced by an Italian entrepreneur at the La Chacara farm. Coffee is cultivated on the slopes of these mountains, mostly between 600 and 1,200 m (2,000 and 3,900 ft) above sea level. Mexico produces about 4 million sacks of green coffee each year, fifth in the world behind Brazil, Colombia, Indonesia and Vietnam. Most producers are small, with plots of land under five ha (12 acres). From November to January, the annual crop is harvested and processed, employing thousands of seasonal workers. Lately, a number of coffee haciendas have been developing tourism infrastructure as well.
Chiapas is located in the tropical belt of the planet, but the climate is moderated in many areas by altitude. For this reason, there are hot, semi-hot, temperate and even cold climates. Some areas have abundant rainfall year-round, while others receive most of their rain between May and October, with a dry season from November to April. The mountain areas affect wind and moisture flow, concentrating moisture in certain parts of the state; they are also responsible for some cloud-covered rainforest areas in the Sierra Madre.
Chiapas's rainforests are home to thousands of animals and plants, some of which cannot be found anywhere else in the world. Natural vegetation varies from lowland to highland tropical forest, with pine and oak forests at the highest altitudes and plains with some grassland. Chiapas ranks second in forest resources in Mexico, with valued woods such as pine, cypress, liquidambar, oak, cedar and mahogany. The Lacandon Jungle is one of the last major tropical rainforests in the northern hemisphere, with an extension of 600,000 ha (1,500,000 acres). It contains about sixty percent of Mexico's tropical tree species, 3,500 species of plants, 1,157 species of invertebrates and over 500 species of vertebrates. Chiapas has one of the greatest diversities of wildlife in the Americas, with more than 100 species of amphibians, 700 species of birds, fifty species of mammals and just over 200 species of reptiles. In the hot lowlands, there are armadillos, monkeys, pelicans, wild boar, jaguars, crocodiles, iguanas and many others. In the temperate regions there are species such as bobcats, salamanders, a large red lizard (Abronia lythrochila), weasels, opossums, deer, ocelots and bats. The coastal areas have large quantities of fish, turtles, and crustaceans, many of which are endangered, as they are endemic only to this area. The total biodiversity of the state is estimated at over 50,000 species of plants and animals. The diversity of species is not limited to the hot lowlands: the higher altitudes also have mesophile forests and oak/pine forests in Los Altos, the Northern Mountains and the Sierra Madre, and there are extensive estuaries and mangrove wetlands along the coast.
Chiapas has about thirty percent of Mexico's fresh water resources. The Sierra Madre divides them into those that flow to the Pacific and those that flow to the Gulf of Mexico. Most of the former are short rivers and streams; most of the longer rivers flow to the Gulf. Most Pacific-side rivers do not drain directly into the ocean but into lagoons and estuaries. The two largest rivers are the Grijalva and the Usumacinta, both part of the same system. The Grijalva has four dams built on it: the Belisario Dominguez (La Angostura), Manuel Moreno Torres (Chicoasén), Nezahualcóyotl (Malpaso), and Angel Albino Corzo (Peñitas). The Usumacinta divides the state from Guatemala and is the longest river in Central America. In total, the state has 110,000 ha (270,000 acres) of surface waters, 260 km (160 mi) of coastline, control of 96,000 km² (37,000 sq mi) of ocean, 75,230 ha (185,900 acres) of estuaries and ten lake systems. Laguna Miramar is a lake in the Montes Azules reserve and, with about 40 km of shoreline, the largest in the Lacandon Jungle. The color of its waters varies from indigo to emerald green; in ancient times, there were settlements on its islands and in the caves on its shoreline. The Catazajá Lake is 28 km (17 mi) north of the city of Palenque. It is formed by rainwater captured as it makes its way to the Usumacinta River. It contains wildlife such as manatees and iguanas and is surrounded by rainforest. Fishing on this lake is an ancient tradition, and the lake hosts an annual bass fishing tournament. The Welib Já Waterfall is located on the road between Palenque and Bonampak.
The state has thirty-six protected areas at the state and federal levels, along with 67 areas protected by various municipalities. The Sumidero Canyon National Park was decreed in 1980 with an extension of 21,789 ha (53,840 acres). It extends over two regions of the state, the Central Depression and the Central Highlands, over the municipalities of Tuxtla Gutiérrez, Nuevo Usumacinta, Chiapa de Corzo and San Fernando. The canyon has steep, vertical sides that rise up to 1,000 m (3,300 ft) above the river below, covered mostly with tropical rainforest, though some areas have xerophile vegetation such as cactus. The river below, which has cut the canyon over the course of twelve million years, is the Grijalva. The canyon is emblematic for the state, as it is featured in the state seal. The Sumidero Canyon was once the site of a battle between the Spaniards and the Chiapanecans, many of whom chose to throw themselves from the high edges of the canyon rather than be defeated by the Spanish forces. Today, the canyon is a popular destination for ecotourism. Visitors can take boat trips down the river that runs through the canyon and see the area's many birds and abundant vegetation.
The Montes Azules Biosphere Reserve was decreed in 1978. It is located in the northeast of the state in the Lacandon Jungle. It covers 331,200 ha (818,000 acres) in the municipalities of Maravilla Tenejapa, Ocosingo and Las Margaritas. It conserves highland perennial rainforest. The jungle is in the Usumacinta River basin east of the Chiapas Highlands. It is recognized by the United Nations Environment Programme for its global biological and cultural significance. In 1992, the 61,874 ha (152,890-acre) Lacantun Reserve, which includes the Classic Maya archaeological sites of Yaxchilan and Bonampak, was added to the biosphere reserve.
Agua Azul Waterfall Protection Area is in the Northern Mountains in the municipality of Tumbalá. It covers 2,580 ha (6,400 acres) of rainforest and pine-oak forest, centered on the waterfalls it is named after. It is located in an area locally called the "Mountains of Water", as many rivers flow through there on their way to the Gulf of Mexico. The rugged terrain encourages waterfalls with large pools at the bottom, which the falling water has carved into the sedimentary rock and limestone. Agua Azul is one of the best known waterfalls in the state. The waters of the Agua Azul River emerge from a cave that forms a natural bridge of thirty meters, followed by five small waterfalls in succession, all with pools of water at the bottom. In addition to Agua Azul, the area has other attractions, such as the Shumuljá River, which contains rapids and waterfalls; the Misol Há Waterfall, with a thirty-meter drop; the Bolón Ajau Waterfall, with a fourteen-meter drop; the Gallito Copetón rapids; the Blacquiazules Waterfalls; and a section of calm water called the Agua Clara.
The El Ocote Biosphere Reserve, decreed in 1982, is located in the Northern Mountains at the boundary with the Sierra Madre del Sur in the municipalities of Ocozocoautla, Cintalapa and Tecpatán. It has a surface area of 101,288.15 ha (250,288.5 acres) and preserves a rainforest area with karst formations. The Lagunas de Montebello National Park was decreed in 1959 and consists of 7,371 ha (18,210 acres) near the Guatemalan border in the municipalities of La Independencia and La Trinitaria. It contains two of the most threatened ecosystems in Mexico: the "cloud rainforest" and the Soconusco rainforest. The El Triunfo Biosphere Reserve, decreed in 1990, is located in the Sierra Madre de Chiapas in the municipalities of Acacoyagua, Ángel Albino Corzo, Montecristo de Guerrero, La Concordia, Mapastepec, Pijijiapan, Siltepec and Villa Corzo near the Pacific Ocean, with 119,177.29 ha (294,493.5 acres). It conserves areas of tropical rainforest and many freshwater systems endemic to Central America. It is home to around 400 species of birds, including several rare species such as the horned guan, the quetzal and the azure-rumped tanager.

The Palenque National Forest is centered on the archaeological site of the same name and was decreed in 1981. It is located in the municipality of Palenque, where the Northern Mountains meet the Gulf Coast Plain. It extends over 1,381 ha (3,410 acres) of tropical rainforest. The Laguna Bélgica Conservation Zone is located in the northwest of the state in the municipality of Ocozocoautla. It covers forty-two hectares centered on the Bélgica Lake. The El Zapotal Ecological Center was established in 1980.

Nahá–Metzabok is an area in the Lacandon Forest whose name means "place of the black lord" in Nahuatl. It extends over 617.49 km² (238.41 sq mi) and in 2010 was included in the World Network of Biosphere Reserves. Two main communities in the area are called Nahá and Metzabok. They were established in the 1940s, but the oldest communities in the area belong to the Lacandon people. The area has large numbers of wildlife, including endangered species such as eagles, quetzals and jaguars.
As of 2010, the population was 4,796,580, making Chiapas the eighth most populous state in Mexico. The 20th century saw large population growth in Chiapas: from fewer than one million inhabitants in 1940, the state grew to about two million in 1980 and over four million in 2005. Overcrowded land in the highlands was relieved when the rainforest to the east was subject to land reform, and cattle ranchers, loggers, and subsistence farmers migrated to the rainforest area. The population of the Lacandon was only one thousand people in 1950, but by the mid-1990s it had increased to 200,000. As of 2010, 78% of the population lives in urban communities and 22% in rural communities. While birthrates are still high in the state, they have come down in recent decades from 7.4 births per woman in 1950. However, these rates still mean significant population growth in raw numbers. About half of the state's population is under age 20, and the average age is 19. In 2005, there were 924,967 households, 81% headed by men and the rest by women. Most households were nuclear families (70.7%), with 22.1% consisting of extended families.
More people migrate out of Chiapas than into it, with emigrants leaving primarily for Tabasco, Oaxaca, Veracruz, the State of Mexico and the Federal District (Mexico City).
While Catholics remain the majority, their numbers have dropped as many have converted to Protestant denominations in recent decades. Islam is a small but growing religion, as the numbers of indigenous Muslims and Muslim immigrants from Africa continue to rise. The National Presbyterian Church in Mexico has a large following in Chiapas; some estimate that 40% of the population are followers of the Presbyterian church.
There are a number of people in the state with African features, descendants of slaves brought to the state in the 16th century. There are also those with predominantly European features, descendants of the original Spanish colonizers as well as later immigrants to Mexico. The latter mostly came at the end of the 19th and beginning of the 20th century under the Porfirio Díaz regime to start plantations. According to the 2020 Census, 1.02% of Chiapas's population identified as Black, Afro-Mexican, or of African descent.
Over the history of Chiapas, there have been three main indigenous groups: the Mixe-Zoques, the Mayas and the Chiapa. Today, there are an estimated fifty-six linguistic groups. As of the 2005 Census, 957,255 people spoke an indigenous language out of a total population of about 3.5 million; of these, one third do not speak Spanish. Out of Chiapas's 111 municipios, 99 have significant indigenous populations: 22 municipalities have indigenous populations over 90%, and 36 have native populations exceeding 50%. However, despite population growth in indigenous villages, the proportion of indigenous people relative to the rest of the population continues to fall, to less than 35%. Indigenous populations are concentrated in a few areas, with the largest concentration of indigenous-language speakers living in five of Chiapas's nine economic regions: Los Altos, Selva, Norte, Fronteriza, and Sierra. The Soconusco, Centro and Costa regions, by contrast, have populations that are considered to be predominantly mestizo.
The state has about 13.5% of all of Mexico's indigenous population, and it has been ranked among the ten "most indianized" states, with only Campeche, Oaxaca, Quintana Roo and Yucatán ranked above it between 1930 and the present. These indigenous peoples have been historically resistant to assimilation into broader Mexican society, as best seen in the retention rates of indigenous languages and the historic demands for autonomy over geographic areas as well as cultural domains. The latter has been especially prominent since the Zapatista uprising in 1994. Most of Chiapas's indigenous groups are descended from the Mayans and speak languages that are closely related to one another, belonging to the Western Maya language group. The state was part of a large region dominated by the Mayans during the Classic period. The most numerous of these Mayan groups include the Tzeltal, Tzotzil, Ch'ol, Zoque, Tojolabal, Lacandon and Mam, which have traits in common such as syncretic religious practices and social structures based on kinship. The most common Western Maya languages are Tzeltal and Tzotzil, along with Chontal, Ch'ol, Tojolabal, Chuj, Kanjobal, Acatec, Jacaltec and Motozintlec.
Twelve of Mexico's officially recognized native peoples living in the state have conserved their language, customs, history, dress and traditions to a significant degree. The primary groups include the Tzeltal, Tzotzil, Ch'ol, Tojolabal, Zoque, Chuj, Kanjobal, Mam, Jacalteco, Mochó, Cakchiquel and Lacandon. Most indigenous communities are found in the municipalities of the Centro, Altos, Norte and Selva regions, with many having indigenous populations of over fifty percent. These range from municipalities such as Bochil, Sitalá, Pantepec and Simojovel to those with over ninety percent indigenous population, such as San Juan Cancuc, Huixtán, Tenejapa, Tila, Oxchuc, Tapalapa, Zinacantán, Mitontic, Ocotepec, Chamula, and Chalchihuitán. The most numerous indigenous communities are the Tzeltal and Tzotzil peoples, who number about 400,000 each, together accounting for about half of the state's indigenous population. The next most numerous are the Ch'ol, with about 200,000 people, and the Tojolabal and Zoques, who number about 50,000 each. The top three municipalities in Chiapas by number of indigenous-language speakers three years of age and older are Ocosingo (133,811), Chilon (96,567), and San Juan Chamula (69,475). Together they account for 24.8% (299,853) of the state's 1,209,057 indigenous-language speakers three years or older.
Although most indigenous-language speakers are bilingual, especially in the younger generations, many of these languages have shown resilience. Four of Chiapas's indigenous languages, Tzeltal, Tzotzil, Tojolabal and Ch'ol, are high-vitality languages, meaning that a high percentage of these ethnicities speak the language and that there is a high rate of monolingualism in it; each is used in over 80% of homes. Zoque is considered to be of medium vitality, with a rate of bilingualism of over 70% and home use somewhere between 65% and 80%. Maya is considered to be of low vitality, with almost all of its speakers bilingual in Spanish. The most spoken indigenous languages as of 2010 are Tzeltal with 461,236 speakers, Tzotzil with 417,462, Ch'ol with 191,947 and Zoque with 53,839. In total, 1,141,499 people, or 27% of the total population, speak an indigenous language; of these, 14% do not speak Spanish. Studies done between 1930 and 2000 have indicated that Spanish is not dramatically displacing these languages: in raw numbers, speakers of these languages are increasing, especially among groups with a long history of resistance to Spanish/Mexican domination. Language maintenance has been strongest in areas related to where the Zapatista uprising took place, such as the municipalities of Altamirano, Chamula, Chanal, Larráinzar, Las Margaritas, Ocosingo, Palenque, Sabanilla, San Cristóbal de Las Casas and Simojovel.
The state's rich indigenous tradition, along with its associated political uprisings, especially that of 1994, has attracted great interest from other parts of Mexico and abroad. It has been especially appealing to a variety of academics, including anthropologists, archeologists, historians, psychologists and sociologists. The concept of "mestizo", or mixed indigenous and European heritage, became important to Mexico's identity by the time of Independence, but Chiapas has kept its indigenous identity to the present day. Since the 1970s, this has been supported by the Mexican government as it has shifted toward cultural policies that favor a "multicultural" identity for the country. One major exception to the separatist indigenous identity has been the case of the Chiapa people, from whom the state's name comes, who have mostly been assimilated and intermarried into the mestizo population.
Most indigenous communities have economies based primarily on traditional agriculture, such as the cultivation and processing of corn, beans and coffee as a cash crop; in the last decade, many have begun producing sugarcane and jatropha for refinement into biodiesel and ethanol for automobile fuel. The raising of livestock, particularly chicken and turkey and to a lesser extent beef cattle and farmed fish, is also a major economic activity. Many indigenous people, in particular the Maya, are employed in the production of traditional clothing, fabrics, textiles, wood items, artworks and traditional goods such as jade and amber works. Tourism has provided a number of these communities with markets for their handcrafts and works, some of which are very profitable.
San Cristóbal de las Casas and San Juan Chamula maintain a strong indigenous identity. On market day, many indigenous people from rural areas come into San Cristóbal to buy and sell mostly items for everyday use such as fruit, vegetables, animals, cloth, consumer goods and tools. San Juan Chamula is considered a center of indigenous culture, especially for its elaborate festivals of Carnival and the Day of Saint John. It was common for politicians, especially during the Institutional Revolutionary Party's dominance, to visit during election campaigns, dress in indigenous clothing and carry a carved walking stick, a traditional sign of power.

Relations between the indigenous ethnic groups are complicated. While there has been inter-ethnic political activism, such as that promoted by the Diocese of Chiapas in the 1970s and the Zapatista movement in the 1990s, there has been inter-indigenous conflict as well. Much of this has been based on religion, pitting those of the traditional Catholic/indigenous beliefs, who support the traditional power structure, against Protestants, Evangelicals and Word of God Catholics (directly allied with the Diocese), who tend to oppose it. This is a particularly significant problem among the Tzeltals and Tzotzils. Starting in the 1970s, traditional leaders in San Juan Chamula began expelling dissidents from their homes and land, forcing about 20,000 indigenous people to leave over a thirty-year period. This continues to be a serious social problem, although authorities downplay it. Recently there has been political, social and ethnic conflict between the Tzotzil, who are more urbanized and have a significant number of Protestant practitioners, and the Tzeltal, who are predominantly Catholic and live in smaller farming communities. Many Protestant Tzotzil have accused the Tzeltal of ethnic discrimination and intimidation due to their religious beliefs, and the Tzeltal have in turn accused the Tzotzil of singling them out for discrimination.
Clothing, especially women's clothing, varies by indigenous group. For example, women in Ocosingo tend to wear a blouse with a round collar embroidered with flowers and a black skirt decorated with ribbons and tied with a cloth belt. The Lacandon people tend to wear a simple white tunic. They also make a ceremonial tunic from bark, decorated with astronomy symbols. In Tenejapa, women wear a huipil embroidered with Mayan fretwork along with a black wool rebozo. Men wear short pants, embroidered at the bottom.
The Tzeltals call themselves Winik atel, which means "working men". This is the largest ethnicity in the state, mostly living southeast of San Cristóbal, with the largest number in Amatenango. Today, there are about 500,000 Tzeltals in Chiapas. Tzeltal Mayan, part of the Mayan language family, is today spoken by about 375,000 people, making it the fourth-largest language group in Mexico. There are two main dialects: highland (or Oxchuc) and lowland (or Bachajonteco). This language, along with Tzotzil, is from the Tzeltalan subdivision of the Mayan language family, and lexico-statistical studies indicate that the two languages probably became differentiated from one another around 1200. Most children are bilingual in the language and Spanish, although many of their grandparents are monolingual Tzeltal speakers. Each Tzeltal community constitutes a distinct social and cultural unit with its own well-defined lands, wearing apparel, kinship system, politico-religious organization, economic resources, crafts, and other cultural features. Women are distinguished by a black skirt with a wool belt and an undyed cotton blouse embroidered with flowers; their hair is tied with ribbons and covered with a cloth. Most men do not use traditional attire. Agriculture is the basic economic activity of the Tzeltal people. Traditional Mesoamerican crops such as maize, beans, squash, and chili peppers are the most important, but they also grow a variety of other crops, including wheat, manioc, sweet potatoes, cotton, chayote, fruits, other vegetables, and coffee.
Tzotzil speakers number just slightly fewer than the Tzeltals at 226,000, although the number who identify with the ethnicity is probably higher. Tzotzils are found in the highlands, or Los Altos, and spread out towards the northeast near the border with Tabasco, though Tzotzil communities can be found in almost every municipality of the state. They are concentrated in Chamula, Zinacantán, Chenalhó, and Simojovel. Their language is closely related to Tzeltal and distantly related to Yucatec Mayan and Lacandon. Men dress in short pants tied with a red cotton belt and a shirt that hangs down to the knees, along with leather huaraches and a hat decorated with ribbons. The women wear a red or blue skirt and a short huipil as a blouse, and use a chal or rebozo to carry babies and bundles. Tzotzil communities are governed by a katinab, who is selected for life by the leaders of each neighborhood. The Tzotzils are also known for their continued use of the temazcal for hygiene and medicinal purposes.
The Ch'ols of Chiapas migrated to the northwest of the state starting about 2,000 years ago, when they were concentrated in Guatemala and Honduras. Those Ch'ols who remained in the south are distinguished by the name Chortís. The Chiapas Ch'ols are also closely related to the Chontal in Tabasco. Ch'ols are found in Tila, Tumbalá, Sabanilla, Palenque, and Salto de Agua, with an estimated population of about 115,000 people. The Ch'ol language belongs to the Maya family and is related to Tzeltal, Tzotzil, Lacandon, Tojolabal, and Yucatec Mayan. There are three varieties of Ch'ol (spoken in Tila, Tumbalá, and Sabanilla), all mutually intelligible, and over half of speakers are monolingual in the language. Women wear a long navy blue or black skirt with a white blouse heavily embroidered in bright colors and a sash with a red ribbon. The men only occasionally use traditional dress, for events such as the feast of the Virgin of Guadalupe. This dress usually includes pants, shirts and huipils made of undyed cotton, with leather huaraches, a carrying sack and a hat. The fundamental economic activity of the Ch'ols is agriculture. They primarily cultivate corn and beans, as well as sugar cane, rice, coffee, and some fruits. They hold Catholic beliefs strongly influenced by native ones, and harvests are celebrated on the Feast of Saint Rose on 30 August.
The Tojolabals are estimated at 35,000 in the highlands. According to oral tradition, the Tojolabales came north from Guatemala. The largest community is Ingeniero González de León in the La Cañada region, an hour outside the municipal seat of Las Margaritas. Tojolabales are also found in Comitán, Trinitaria, Altamirano and La Independencia. This area is filled with rolling hills and has a temperate, moist climate, with fast-moving rivers and jungle vegetation. Tojolabal is related to Kanjobal, but also to Tzeltal and Tzotzil; however, most of the youngest members of this ethnicity speak Spanish. Women dress traditionally from childhood, with brightly colored skirts decorated with lace or ribbons and a blouse decorated with small ribbons, and they cover their heads with kerchiefs. They embroider many of their own clothes but do not sell them. Married women arrange their hair in two braids, while single women wear it loose, decorated with ribbons. Men no longer wear traditional garb daily, as it is considered too expensive to make.
The Zoques are found in 3,000 square kilometers of the center and west of the state, scattered among hundreds of communities. They were one of the first native peoples of Chiapas, with archeological ruins tied to them dating back as far as 3500 BCE. Their language is not Mayan but rather related to Mixe, which is found in Oaxaca and Veracruz. By the time the Spanish arrived, they had been reduced in number and territory. Their ancient capital was Quechula, which was covered with water by the creation of the Malpaso Dam, along with the ruins of Guelegas, which was first buried by an eruption of the Chichonal volcano. There are still Zoque ruins at Janepaguay and in the Ocozocuautla and La Ciénega valleys.
The Lacandons are one of the smallest indigenous groups of the state, with a population estimated between 600 and 1,000. They are mostly located in the communities of Lacanjá Chansayab, Najá, and Mensabak in the Lacandon Jungle. They live near the ruins of Bonampak and Yaxchilan, and local lore states that the gods resided here when they lived on Earth. They inhabit about a million hectares of rainforest, but from the 16th century to the present, migrants have taken over the area, most of them indigenous people from other parts of Chiapas. This has dramatically altered their lifestyle and worldview. Traditional Lacandon shelters are huts made with fronds and wood with an earthen floor, but these have mostly given way to modern structures.
The Mochós or Motozintlecos are concentrated in the municipality of Motozintla on the Guatemalan border. According to anthropologists, these people are an "urban" ethnicity, as they are mostly found in the neighborhoods of the municipal seat. Other communities can be found near the Tacaná volcano and in the municipalities of Tuzantán and Belisario Dominguez. The name "Mochó" comes from a response many gave to the Spanish, whom they could not understand; it means "I don't know." This community is in the process of disappearing as its numbers shrink.
The Mams are a Mayan ethnicity numbering about 20,000, found in thirty municipalities, especially Tapachula, Motozintla, El Porvenir, Cacahoatán and Amatenango in the southeastern Sierra Madre of Chiapas. The Mame language is one of the most ancient Mayan languages; 5,450 Mame speakers were tallied in Chiapas in the 2000 census. These people first migrated to the border region between Chiapas and Guatemala at the end of the nineteenth century, establishing scattered settlements. In the 1960s, several hundred migrated to the Lacandon rain forest near the confluence of the Santo Domingo and Jataté Rivers. Those who live in Chiapas are referred to locally as the "Mexican Mam (or Mame)" to differentiate them from those in Guatemala. Most live around the Tacaná volcano, which the Mams call "our mother", as it is considered to be the source of the fertility of the area's fields. The masculine deity is the Tajumulco volcano, which is in Guatemala.
In the last decades of the 20th century, Chiapas received a large number of indigenous refugees, especially from Guatemala, many of whom remain in the state. These have added ethnicities such as the Kekchi, Chuj, Ixil, Kanjobal, K'iche' and Cakchikel to the population. The Kanjobal mainly live along the border between Chiapas and Guatemala, with almost 5,800 speakers of the language tallied in the 2000 census. It is believed that a significant number of these Kanjobal-speakers may have been born in Guatemala and immigrated to Chiapas, maintaining strong cultural ties to the neighboring nation.
Chiapas accounts for 1.73% of Mexico's GDP. The primary sector, agriculture, produces 15.2% of the state's GDP; the secondary sector, mostly energy production, accounts for 21.8%; and the tertiary sector of commerce, services and tourism accounts for the remainder. The share of GDP coming from services is rising while that of agriculture is falling. The state is divided into nine economic regions, established in the 1980s to facilitate statewide economic planning; many of these regions are based on state and federal highway systems. They are Centro, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa.
Despite being rich in resources, Chiapas, along with Oaxaca and Guerrero, lags behind the rest of the country in almost all socioeconomic indicators. As of 2005, there were 889,420 residential units; 71% had running water, 77.3% sewerage, and 93.6% electricity. Construction of these units varies from modern block and concrete to wood and laminate.
Because of its high rate of economic marginalization, more people migrate away from Chiapas than to it. Most of its socioeconomic indicators are the lowest in the country, including income, education, health and housing. Illiteracy is significantly higher than in the rest of the country, although the situation has improved since the 1970s, when over 45% were illiterate, and the 1980s, when about 32% were. The tropical climate presents health challenges, with most illnesses related to the gastrointestinal tract and parasites. As of 2005, the state had 1,138 medical facilities: 1,098 outpatient and 40 inpatient, most run by IMSS, ISSSTE and other government agencies. The implementation of NAFTA had negative effects on the economy, particularly by lowering prices for agricultural products, and made the southern states of Mexico poorer in comparison to those in the north; over 90% of the country's poorest municipalities are in the south. As of 2006, 31.8% of workers were employed in communal, social and personal services; 18.4% in financial services, insurance and real estate; 10.7% in commerce, restaurants and hotels; 9.8% in construction; 8.9% in utilities; 7.8% in transportation; 3.4% in industry (excluding handcrafts); and 8.4% in agriculture.
Although until the 1960s many indigenous communities were considered by scholars to be autonomous and economically isolated, this was never the case. Economic conditions began forcing many to migrate for work, especially in agriculture for non-indigenous landowners. However, unlike many other migrant workers, most indigenous people in Chiapas have remained strongly tied to their home communities. A study as early as the 1970s showed that 77 percent of heads of household migrated outside the Chamula municipality because local land did not produce enough to support their families. In the 1970s, cuts in the price of corn forced many large landowners to convert their fields into pasture for cattle, displacing many hired laborers, since cattle required less work. These agricultural laborers began to work for the government on infrastructure projects financed by oil revenue. It is estimated that from the 1980s to the 1990s as many as 100,000 indigenous people moved from the mountain areas into cities in Chiapas, with some moving out of the state to Mexico City, Cancún and Villahermosa in search of employment.
Agriculture, livestock, forestry and fishing employ over 53% of the state's population, although productivity is considered to be low. Agriculture includes both seasonal and perennial plants. Major crops include corn, beans, sorghum, soybeans, peanuts, sesame seeds, coffee, cacao, sugar cane, mangos, bananas, and palm oil. These crops take up 95% of the cultivated land in the state and account for 90% of agricultural production. Only four percent of fields are irrigated, with the rest dependent on rainfall, either seasonally or year round. Chiapas ranks second among the Mexican states in the production of cacao, the product used to make chocolate, and is responsible for about 60 percent of Mexico's total coffee output. The production of bananas, cacao and corn makes Chiapas Mexico's second largest agricultural producer overall.
Coffee is the state's most important cash crop, with a history dating from the 19th century. The crop was introduced in 1846 by Jeronimo Manchinelli, who brought 1,500 seedlings from Guatemala to his farm La Chacara; a number of other farms followed. Coffee production intensified during the regime of Porfirio Díaz, when Europeans came to own many of the large farms in the area. By 1892, there were 22 coffee farms in the region, among them Nueva Alemania, Hamburgo, Chiripa, Irlanda, Argovia, San Francisco, and Linda Vista in the Soconusco region. Since then, coffee production has grown and diversified to include large plantations, the use of both free and forced labor, and a significant sector of small producers. While most coffee is grown in the Soconusco, other areas grow it as well, including the municipalities of Oxchuc, Pantheló, El Bosque, Tenejapa, Chenalhó, Larráinzar, and Chalchihuitán, with around six thousand producers. The state also has organic coffee production, with 18 million tons grown annually by 60,000 producers. One third of these producers are indigenous women and other peasant farmers who grow the coffee under the shade of native trees without the use of agrochemicals. Some of this coffee is even grown in environmentally protected areas such as the El Triunfo reserve, where ejidos with 14,000 people grow the coffee and sell it to cooperatives, which in turn sell it to companies such as Starbucks, though the main market is Europe. Some growers have created cooperatives of their own to cut out the middleman.
Ranching occupies about three million hectares of natural and induced pasture, with about 52% of all pasture induced. Most livestock raising is done by families using traditional methods. Most important are meat and dairy cattle, followed by pigs and domestic fowl; these three account for 93% of the value of production. Annual milk production in Chiapas totals about 180 million liters. The state's cattle production, along with timber from the Lacandon Jungle and energy output, gives it a certain amount of economic clout compared to other states in the region.
Forestry is mostly based on conifers and common tropical species, producing 186,858 m³ per year at a value of 54,511,000 pesos. Exploited non-wood species include the Camedor palm tree, harvested for its fronds. The fishing industry is underdeveloped but includes the capture of wild species as well as fish farming. Fish production comes both from the ocean and from the many freshwater rivers and lakes. In 2002, 28,582 tons of fish valued at 441.2 million pesos were produced. Species include tuna, shark, shrimp, mojarra and crab.
The state's abundant rivers and streams have been dammed to provide about fifty-five percent of the country's hydroelectric energy. Much of this is sent to other states, accounting for over six percent of all of Mexico's energy output. The main power stations are located at Malpaso, La Angostura, Chicoasén and Peñitas, which together produce about eight percent of Mexico's hydroelectric energy. The Manuel Moreno Torres plant on the Grijalva River is the most productive in Mexico. All of the hydroelectric plants are owned and operated by the Federal Electricity Commission (Comisión Federal de Electricidad, CFE).
Chiapas is rich in petroleum reserves. Oil production began during the 1980s, and Chiapas has become the fourth largest producer of crude oil and natural gas among the Mexican states. Many reserves are as yet untapped, but between 1984 and 1992, PEMEX drilled nineteen oil wells in the Lacandona Jungle. Currently, petroleum reserves are exploited in the municipalities of Juárez, Ostuacán, Pichucalco and Reforma in the north of the state, with 116 wells accounting for about 6.5% of the country's oil production. The state also provides about a quarter of the country's natural gas. This production equals 6,313.6 m³ (222,960 cu ft) of natural gas and 17,565,000 barrels of oil per year.
Industry is limited to small and micro enterprises, including auto parts, bottling, fruit packing, coffee and chocolate processing, production of lime, bricks and other construction materials, sugar mills, furniture making, textiles, printing and the production of handcrafts. The two largest enterprises are the Comisión Federal de Electricidad and a Petróleos Mexicanos refinery. Chiapas opened its first assembly plant in 2002, a fact that highlights the historical lack of industry in this area.
Chiapas produces one of the widest varieties of handcrafts and folk art in Mexico. One reason for this is its many indigenous ethnicities, who produce traditional items out of identity as well as for commercial reasons. One commercial reason is the market for crafts provided by the tourism industry; another is that most indigenous communities can no longer provide for their own needs through agriculture. The need to generate outside income has led many indigenous women to produce crafts communally, which has not only had economic benefits but also involved them in the political process. Unlike many other states, Chiapas has a wide variety of wood resources such as cedar and mahogany, as well as plant species such as reeds, ixtle and palm. It also has minerals such as obsidian, amber, jade and several types of clay, along with animals used for leather and insects used to produce the dyes that create the colors associated with the region. Items include various types of handcrafted clothing, dishes, jars, furniture, roof tiles, toys, musical instruments, tools and more.
Chiapas's most important handcraft is textiles, most of which are cloth woven on a backstrap loom. Indigenous girls often learn how to sew and embroider before they learn how to speak Spanish; they are also taught weaving techniques and how to make natural dyes from insects. Many of the items produced are still for day-to-day use, often dyed in bright colors with intricate embroidery. They include skirts, belts, rebozos, blouses, huipils and shoulder wraps called chals. Designs are in red, yellow, turquoise blue, purple, pink, green and various pastels, decorated with motifs such as flowers, butterflies, and birds, all based on local flora and fauna. Commercially, indigenous textiles are most often found in San Cristóbal de las Casas, San Juan Chamula and Zinacantán. The best textiles are considered to be from Magdalenas, Larráinzar, Venustiano Carranza and Sibaca.
One of the main minerals of the state is amber, much of which is 25 million years old, with quality comparable to that found in the Dominican Republic. Chiapan amber has a number of unique qualities, including much that is clear all the way through and some with fossilized insects and plants. Most Chiapan amber is worked into jewelry such as pendants, rings and necklaces. Colors vary from white to yellow/orange to deep red, but there are also green and pink tones. Since pre-Hispanic times, native peoples have believed amber to have healing and protective qualities. The largest amber mine is in Simojovel, a small village 130 km (81 mi) from Tuxtla Gutiérrez, which produces 95% of Chiapas's amber. Other mines are found in Huitiupán, Totolapa, El Bosque, Pueblo Nuevo Solistahuacán, Pantelhó and San Andrés Duraznal. According to the Museum of Amber in San Cristóbal, almost 300 kg of amber is extracted per month in the state. Prices vary depending on quality and color.
The major center for ceramics in the state is the city of Amatenango del Valle, with its barro blanco (white clay) pottery. The most traditional ceramic in Amatenango and Aguacatenango is a type of large jar called a cantaro, used to transport water and other liquids. Many pieces created from this clay are ornamental, as well as traditional pieces for everyday use such as comals, dishes, storage containers and flowerpots. All pieces here are made by hand using techniques that go back centuries. Other communities that produce ceramics include Chiapa de Corzo, Tonalá, Ocuilpa, Suchiapa and San Cristóbal de las Casas.
Wood crafts in the state center on furniture, brightly painted sculptures and toys. The Tzotzils of San Juan de Chamula are known for their sculptures as well as for their sturdy furniture. Sculptures are made from woods such as cedar, mahogany and strawberry tree. Another town noted for its sculptures is Tecpatán. The making of lacquer to decorate wooden and other items goes back to the colonial period. The best-known area for this type of work, called "laca", is Chiapa de Corzo, which has a museum dedicated to it. One reason this type of decoration became popular in the state is that it protected items from the constant humidity of the climate. Much of the laca in Chiapa de Corzo is made in the traditional way, with natural pigments and sands used to cover gourds, dipping spoons, chests, niches and furniture. It is also used to create the Parachicos masks.
Traditional Mexican toys, which have all but disappeared in the rest of Mexico, are still readily found here and include the cajita de la serpiente, yo-yos, ball in cup and more. Other wooden items include masks, cooking utensils, and tools. One famous toy is the "muñecos zapatistas" (Zapatista dolls), which are based on the revolutionary group that emerged in the 1990s.
Ninety-four percent of the state's commercial outlets are small retail stores, with about 6% wholesalers. There are 111 municipal markets, 55 tianguis, three wholesale food markets and 173 large vendors of staple products. The service sector is the most important to the economy, consisting mostly of commerce, warehousing and tourism.
Tourism brings large numbers of visitors to the state each year. Most of Chiapas's tourism is based on its culture, colonial cities and ecology. The state has a total of 491 ranked hotels with 12,122 rooms. There are also 780 other establishments catering primarily to tourism, such as services and restaurants.
There are three main tourist routes: the Maya Route, the Colonial Route and the Coffee Route. The Maya Route runs along the border with Guatemala in the Lacandon Jungle and includes the sites of Palenque, Bonampak and Yaxchilan, along with the natural attractions of the Agua Azul Waterfalls, Misol-Há Waterfall, and the Catazajá Lake. Palenque is the most important of these sites, and one of the most important tourist destinations in the state. Yaxchilan was a Mayan city along the Usumacinta River that developed between 350 and 810 CE. Bonampak is known for its well preserved murals. These Mayan sites, which contain a large number of structures dating back over a thousand years, especially to the sixth century, have made the state an attraction for international tourism. In addition to the sites on the Mayan Route, there are others within the state away from the border, such as Toniná, near the city of Ocosingo.
The Colonial Route is mostly in the central highlands, with a significant number of churches, monasteries and other structures from the colonial period, along with some from the 19th century and even the early 20th. The most important city on this route is San Cristóbal de las Casas, located in the Los Altos region in the Jovel Valley. The historic center of the city is filled with tiled roofs, patios with flowers, balconies and Baroque facades, along with Neoclassical and Moorish designs. It is centered on a main plaza surrounded by the cathedral, the municipal palace, the Portales commercial area and the San Nicolás church. In addition, it has museums dedicated to the state's indigenous cultures, one to amber and one to jade, both of which have been mined in the state. Other attractions along this route include Comitán de Domínguez and Chiapa de Corzo, along with small indigenous communities such as San Juan Chamula. The state capital of Tuxtla Gutiérrez does not have many colonial era structures left, but it lies near the area's most famous natural attraction, the Sumidero Canyon. The canyon is popular with tourists, who take boat tours on the Grijalva River to see features such as caves (La Cueva del Hombre, La Cueva del Silencio) and the Christmas Tree, a rock and plant formation on the side of one of the canyon walls created by a seasonal waterfall.
The Coffee Route begins in Tapachula and follows a mountainous road into the Soconusco region. The route passes through Puerto Chiapas, a port with modern infrastructure for shipping exports and receiving international cruises. The route visits a number of coffee plantations, such as Hamburgo, Chiripa, Violetas, Santa Rita, Lindavista, Perú-París, San Antonio Chicarras and Rancho Alegre. These haciendas give visitors the opportunity to see how coffee is grown and initially processed on the farms. They also offer a number of ecotourism activities such as mountain climbing, rafting, rappelling and mountain biking, as well as tours into the jungle vegetation and to the Tacaná Volcano. In addition to coffee, the region also produces most of Chiapas's soybeans, bananas and cacao.
The state has a large number of ecological attractions, most of which are connected to water. The main beaches on the coastline include Puerto Arista, Boca del Cielo, Playa Linda, Playa Aventuras, Playa Azul and Santa Brigida. Others are based on the state's lakes and rivers. Laguna Verde is a lake in the Coapilla municipality. The lake is generally green, but its tones constantly change through the day depending on how the sun strikes it; in the early morning and evening hours there can also be blue and ochre tones. The El Chiflón Waterfall is part of an ecotourism center located in a valley with reeds, sugarcane, mountains and rainforest; it is formed by the San Vicente River and has pools of water at the bottom that are popular for swimming. The Las Nubes Ecotourism Center is located in the Las Margaritas municipality near the Guatemalan border. The area features a number of turquoise-blue waterfalls, with bridges and lookout points set up to see them up close.
Still others are based on conservation, local culture and other features. The Las Guacamayas Ecotourism Center is located in the Lacandon Jungle on the edge of the Montes Azules reserve. It is centered on the conservation of the scarlet macaw, which is in danger of extinction. The Tziscao Ecotourism Center is centered on a lake with varying tones. It is located inside the Lagunas de Montebello National Park and offers kayaking, mountain biking and archery. Lacanjá Chansayab is located in the interior of the Lacandon Jungle and is a major Lacandon community; it has some activities associated with ecotourism, such as mountain biking, hiking and cabins. The Grutas de Rancho Nuevo Ecotourism Center is centered on a set of caves with whimsical stalagmite and stalactite formations; horseback riding is available as well.
Architecture in the state begins with the archeological sites of the Mayans and other groups who established color schemes and other details that echo in later structures. After the Spanish subdued the area, the building of Spanish style cities began, especially in the highland areas.
Many of the colonial-era buildings are related to Dominicans who came from Seville. This Spanish city had much Arabic influence in its architecture, and this was incorporated into the colonial architecture of Chiapas, especially in structures dating from the 16th to 18th centuries. However, there are a number of architectural styles and influences present in Chiapas colonial structures, including colors and patterns from Oaxaca and Central America along with indigenous ones from Chiapas.
The main colonial structures are the cathedral and Santo Domingo church of San Cristóbal, and the Santo Domingo monastery and La Pila in Chiapa de Corzo. The San Cristóbal cathedral has a Baroque facade that was begun in the 16th century; by the time it was finished in the 17th, it incorporated a mix of Spanish, Arabic, and indigenous influences. It is one of the most elaborately decorated cathedrals in Mexico.
The churches and former monasteries of Santo Domingo, La Merced and San Francisco have ornamentation similar to that of the cathedral. The main structures in Chiapa de Corzo are the Santo Domingo monastery and the La Pila fountain. Santo Domingo has indigenous decorative details such as double-headed eagles as well as a statue of the founding monk. In San Cristóbal, the Diego de Mazariegos house has a Plateresque facade, while that of Francisco de Montejo, built later in the 18th century, has a mix of Baroque and Neoclassical. Art Deco structures can be found in San Cristóbal and Tapachula in public buildings as well as in a number of rural coffee plantations from the Porfirio Díaz era.
Art in Chiapas is based on the use of color and has strong indigenous influence. This dates back to cave paintings such as those found in Sima de las Cotorras near Tuxtla Gutiérrez and the caverns of Rancho Nuevo, where human remains and offerings were also found. The best-known pre-Hispanic artwork is the Maya murals of Bonampak, which are the only Mesoamerican murals to have been preserved for over 1,500 years. In general, Mayan artwork stands out for its precise depiction of faces and its narrative form. Indigenous forms derive from this background and continue into the colonial period with the use of indigenous color schemes in churches and modern structures such as the municipal palace in Tapachula. Since the colonial period, the state has produced a large number of painters and sculptors. Noted 20th-century artists include Lázaro Gómez, Ramiro Jiménez Chacón, Héctor Ventura Cruz, Máximo Prado Pozo, and Gabriel Gallegos Ramos.
The two best-known poets from the state are Jaime Sabines and Rosario Castellanos, both from prominent Chiapan families. The first was a merchant and diplomat and the second was a teacher, diplomat, theatre director and the director of the Instituto Nacional Indigenista. Jaime Sabines is widely regarded as Mexico's most influential contemporary poet. His work celebrates everyday people in common settings.
The most important instrument in the state is the marimba. In the pre-Hispanic period, indigenous peoples were already producing music with wooden instruments, but the marimba was introduced by African slaves brought to Chiapas by the Spanish. It achieved widespread popularity in the early 20th century due to the formation of the Cuarteto Marimbistico de los Hermanos Gómez in 1918, which popularized the instrument and its popular music not only in Chiapas but in various parts of Mexico and the United States. Along with the Cuban Juan Arozamena, they composed the piece "Las chiapanecas", considered to be the unofficial anthem of the state. In the 1940s, they were also featured in a number of Mexican films. Marimbas are constructed in Venustiano Carranza, Chiapa de Corzo and Tuxtla Gutiérrez.
Like the rest of Mesoamerica, Chiapas has a basic diet based on corn, and its cooking retains strong indigenous influence. Important ingredients include chipilín, a fragrant and strongly flavored herb used in many of the indigenous dishes, and hoja santa, the large anise-scented leaf used in much of southern Mexican cuisine. Chiapan dishes do not incorporate many chili peppers; rather, chili peppers are most often found in the condiments. One reason for this is that a local chili pepper, called the simojovel, is far too hot to use except very sparingly. Chiapan cuisine tends instead to rely on slightly sweet seasonings in its main dishes; cinnamon, plantains, prunes and pineapple are often found in meat and poultry dishes.
Tamales are a major part of the diet and often include chipilín mixed into the dough and hoja santa, within the tamale itself or used to wrap it. One tamale native to the state is the "picte", a fresh sweet corn tamale. Tamales juacanes are filled with a mixture of black beans, dried shrimp, and pumpkin seeds.
Meats are centered on the European-introduced beef, pork and chicken, as many native game animals are in danger of extinction. Meat dishes are frequently accompanied by vegetables such as squash, chayote and carrots. Black beans are the favored type of bean. Beef is favored, especially a thin cut called tasajo, usually served in a sauce. Pepita con tasajo is a common dish at festivals, especially in Chiapa de Corzo. It consists of a squash-seed-based sauce served over reconstituted, shredded dried beef. As Palenque is a cattle-raising area, its beef dishes are particularly good. Pux-Xaxé is a stew of beef organ meats in a mole sauce made with tomato, chili bolita and corn flour. Tzispolá is a beef broth with chunks of meat, chickpeas, cabbage and various types of chili peppers. Pork dishes include cochito, which is pork in an adobo sauce. In Chiapa de Corzo, the local version is cochito horneado, a roast suckling pig flavored with adobo. Seafood is a strong component in many dishes along the coast. Turula is dried shrimp with tomatoes. Sausages, ham and other cold cuts are most often made and consumed in the highlands.
In addition to meat dishes, there is chirmol, a cooked tomato sauce flavored with chili pepper, onion and cilantro, and zats, butterfly caterpillars from the Altos de Chiapas that are boiled in salted water, then sautéed in lard and eaten with tortillas, limes, and green chili pepper.
Sopa de pan consists of layers of bread and vegetables covered with a broth seasoned with saffron and other flavorings. A Comitán speciality is hearts of palm salad in vinaigrette, and Palenque is known for many versions of fried plantains, including those filled with black beans or cheese.
Cheese making is important, especially in the municipalities of Ocosingo, Rayón and Pijijiapan. Ocosingo has its own self-named variety, which is shipped to restaurants and gourmet shops in various parts of the country. Regional sweets include crystallized fruit, coconut candies, flan and compotes. San Cristóbal is noted for its sweets, as well as for its chocolates, coffee and baked goods.
While Chiapas is known for good coffee, there are a number of other local beverages. The oldest is pozol, originally the name for a fermented corn dough. This dough has its origins in the pre-Hispanic period. To make the beverage, the dough is dissolved in water and usually flavored with cocoa and sugar, but sometimes it is left to ferment further. It is then served very cold with much ice. Taxcalate is a drink made from a powder of toasted corn, achiote, cinnamon and sugar prepared with milk or water. Pumbo is a beverage made with pineapple, club soda, vodka, sugar syrup and much ice. Pox is a drink distilled from sugar cane.
As in the rest of Mexico, Christianity was introduced to the native populations of Chiapas by the Spanish conquistadors. However, Catholic beliefs were mixed with indigenous ones to form what is now called "traditionalist" Catholic belief. The Diocese of Chiapas comprises almost the entire state and is centered on San Cristóbal de las Casas. It was founded in 1538 by Pope Paul III to evangelize the area; its most famous bishop of that time was Bartolomé de las Casas. Evangelization focused on grouping indigenous peoples into communities centered on a church. This bishop not only evangelized the people in their own language, he also worked to introduce many of the crafts still practiced today. While still a majority, only 53.9% of Chiapas residents professed the Catholic faith as of 2020, compared to 78.6% of the total national population.
Some indigenous people mix Christianity with Indian beliefs. One particular area where this is strong is the central highlands, in small communities such as San Juan Chamula. In one church in San Cristóbal, Mayan rites, including the sacrifice of animals, are permitted inside the church to ask for good health or to "ward off the evil eye."
Starting in the 1970s, there has been a shift away from traditional Catholic affiliation to Protestant, Evangelical and other Christian denominations. Presbyterians and Pentecostals attracted a large number of converts, with the percentage of Protestants in the state rising from five percent in 1970 to twenty-one percent in 2000. This shift has had a political component as well, with those making the switch tending to identify across ethnic boundaries, especially across indigenous ethnic boundaries, and to oppose the traditional power structure. The National Presbyterian Church in Mexico is particularly strong in Chiapas; the state can be described as one of the denomination's strongholds.
Both Protestants and Word of God Catholics tend to oppose traditional cacique leadership and have often worked to prohibit the sale of alcohol. The latter stance has had the effect of attracting many women to both movements.
The growing number of Protestants, Evangelicals and Word of God Catholics challenging traditional authority has caused religious strife in a number of indigenous communities. In the 1970s, caciques began to expel dissidents from their communities for challenging their power, initially with the use of violence. Tensions have at times been strong, especially in rural areas such as San Juan Chamula, and reached their peak in the 1990s, when a large number of people were injured during open clashes. By 2000, more than 20,000 people had been displaced, but state and federal authorities did not act to stop the expulsions. Today, the situation has quieted, but the tension remains, especially in very isolated communities.
The Spanish Murabitun community, the Comunidad Islámica en España, based in Granada, Spain, and one of its missionaries, Muhammad Nafia (formerly Aureliano Pérez), now emir of the Comunidad Islámica en México, arrived in the state of Chiapas shortly after the Zapatista uprising and established a commune in the city of San Cristóbal. The group, characterized as anti-capitalist, entered an ideological pact with the socialist Zapatista group. President Vicente Fox voiced concerns about the influence of fundamentalism and possible connections to the Zapatistas and the Basque terrorist organization Euskadi Ta Askatasuna (ETA), but it appeared that converts had no interest in political extremism. By 2015, many indigenous Mayans, including more than 700 Tzotzils, had converted to Islam. In San Cristóbal, the Murabitun established a pizzeria, a carpentry workshop and a Quranic school (madrasa) where children learned Arabic and prayed five times a day in the backroom of a residential building, and women in head scarves became a common sight. Nowadays, most of the Mayan Muslims have left the Murabitun and established ties with the CCIM, now following the orthodox Sunni school of Islam. They built the Al-Kausar Mosque in San Cristóbal de las Casas.
The earliest population of Chiapas was in the coastal Soconusco region, where the Chantuto peoples appeared, going back to 5500 BC. This was the oldest Mesoamerican culture discovered to date.
The largest and best-known archaeological sites in Chiapas belong to the Mayan civilization. Apart from a few works by Franciscan friars, knowledge of Maya civilization largely disappeared after the Spanish Conquest. In the mid-19th century, John Lloyd Stephens and Frederick Catherwood traveled through the sites in Chiapas and other Mayan areas and published their writings and illustrations. This led to serious work on the culture, including the deciphering of its hieroglyphic writing.
In Chiapas, principal Mayan sites include Palenque, Toniná, Bonampak, Chinkultic and Tenam Puente, all in or near the Lacandon Jungle. They are technically more advanced than earlier Olmec sites, which can best be seen in the detailed sculpting and novel construction techniques, including structures four stories in height. Mayan sites are noted not only for their large numbers of structures but also for the glyphs, other inscriptions, and artwork that have provided a relatively complete history of many of the sites.
Palenque is the most important Mayan archaeological site in the state. Though much smaller than the huge sites at Tikal or Copán, Palenque contains some of the finest architecture, sculpture and stucco reliefs the Mayans ever produced. The history of the Palenque site begins in 431, with its height coming under Pakal I (615–683), Chan-Bahlum II (684–702) and Kan-Xul, who reigned between 702 and 721. However, the power of Palenque would be lost by the end of the century. Pakal's tomb was not discovered inside the Temple of Inscriptions until 1949. Today, Palenque is a World Heritage Site and one of the best-known sites in Mexico. The much older site of Pampa el Pajón (750/700–600) preserves burials and cultural items, including cranial modifications.
Yaxchilan flourished in the 8th and 9th centuries. The site contains impressive ruins, with palaces and temples bordering a large plaza upon a terrace above the Usumacinta River. The architectural remains extend across the higher terraces and the hills to the south of the river, overlooking both the river itself and the lowlands beyond. Yaxchilan is known for the large quantity of excellent sculpture at the site, such as the monolithic carved stelae and the narrative stone reliefs carved on lintels spanning the temple doorways. Over 120 inscriptions have been identified on the various monuments from the site. The major groups are the Central Acropolis, the West Acropolis and the South Acropolis. The South Acropolis occupies the highest part of the site. The site is aligned with relation to the Usumacinta River, at times causing unconventional orientation of the major structures, such as the two ballcourts.
The city of Bonampak features some of the finest remaining Maya murals. The realistically rendered paintings depict human sacrifices, musicians and scenes of the royal court. In fact, the name means "painted murals." It is centered on a large plaza and has a stairway that leads to the Acropolis. There are also a number of notable stelae.
Toniná, near the city of Ocosingo, has as its main features the Casa de Piedra (House of Stone) and the Acropolis. The latter is a series of seven platforms with various temples and stelae. This site was a ceremonial center that flourished between 600 and 900 CE.
The capital of Sak Tz’i’ (an ancient Maya kingdom), now named Lacanja Tzeltal, was discovered in 2020 in the backyard of a Mexican farmer in Chiapas by researchers led by anthropology professor Charles Golden and bioarchaeologist Andrew Scherer.
The team also unearthed multiple domestic structures used by the population for religious purposes, as well as the "Plaza Muk’ul Ton," or Monuments Plaza, where people gathered for ceremonies.
While the Mayan sites are the best-known, there are a number of other important sites in the state, including many older than the Maya civilization.
The oldest sites are in the coastal Soconusco region. These include those of the Mokaya culture, the oldest ceramic culture of Mesoamerica. Later, Paso de la Amada became important. Many of these sites are in the Mazatán area of Chiapas.
Izapa became an important pre-Mayan site as well.
There are also other ancient sites, including Tapachula, Tecpatán and Pijijiapan. These sites contain numerous embankments and foundations that once lay beneath pyramids and other buildings. Some of these buildings have disappeared, and others have lain covered by jungle, unexplored, for about 3,000 years.
Pijijiapan and Izapa, on the Pacific coast, were the most important pre-Hispanic cities for about 1,000 years, serving as the principal commercial centers between the Mexican Plateau and Central America. Sima de las Cotorras is a sinkhole 140 meters deep with a diameter of 160 meters in the municipality of Ocozocoautla. It contains ancient cave paintings depicting warriors, animals and more. It is best known as a breeding area for parrots, thousands of which leave the area at once at dawn and return at dusk. The state has its Museo Regional de Antropología e Historia in Tuxtla Gutiérrez, which focuses on the pre-Hispanic peoples of the state, with a room dedicated to its history from the colonial period.
The average number of years of schooling is 6.7, corresponding to the beginning of middle school, compared to the national average of 8.6. 16.5% have no schooling at all, 59.6% have only primary or secondary school, 13.7% finish high school or technical school and 9.8% go to university. Eighteen out of every 100 people aged 15 or older cannot read or write, compared to seven out of 100 nationally. Most of Chiapas's illiterate population are indigenous women, who are often prevented from going to school. School absenteeism and dropout rates are highest among indigenous girls.
There are an estimated 1.4 million students in the state from preschool on up. The state has about 61,000 teachers and just over 17,000 centers of education. Preschools and primary schools are divided into modalities called general, indigenous, private and community education, the last sponsored by CONAFE. Middle school is divided into technical, telesecundaria (distance education) and classes for working adults. About 98% of the student population of the state is in state schools. Higher levels of education include "profesional medio" (vocational training), general high school and technology-focused high school. At this level, 89% of students are in public schools. There are 105 universities and similar institutions, 58 public and 47 private, serving over 60,500 students.
The state university is the Universidad Autónoma de Chiapas (UNACH). It began when an organization to establish a state-level institution was formed in 1965, with the university itself opening its doors ten years later, in 1975. The university project was partially supported by UNESCO in Mexico. It integrated older schools such as the Escuela de Derecho (Law School), which originated in 1679; the Escuela de Ingeniería Civil (School of Civil Engineering), founded in 1966; and the Escuela de Comercio y Administración, which was located in Tuxtla Gutiérrez.
The state has approximately 22,517 km (13,991 mi) of highway, with 10,857 km federally maintained and 11,660 km maintained by the state. Almost all of these kilometers are paved. Major highways include the Las Choapas-Raudales-Ocozocoautla, which links the state to Oaxaca, Veracruz, Puebla and Mexico City. Major airports include Llano San Juan in Ocozocoautla, Francisco Sarabia National Airport (which was replaced by Ángel Albino Corzo International Airport) in Tuxtla Gutiérrez and Corazón de María Airport (which closed in 2010) in San Cristóbal de las Casas. These are used for domestic flights, with the airports in Palenque and Tapachula providing international service into Guatemala. There are 22 other airfields in twelve other municipalities. Rail lines extend over 547.8 km (340.4 mi). There are two major lines: one in the north of the state that links the center and southeast of the country, and the Costa Panamericana route, which runs from Oaxaca to the Guatemalan border.
Chiapas's main port, Puerto Chiapas, is just outside the city of Tapachula. It faces 3,361 m (11,027 ft) of ocean and has 3,060 m² (32,900 sq ft) of warehouse space. Next to it is an industrial park covering 2,340,000 m² (25,200,000 sq ft; 234 ha; 580 acres). Puerto Chiapas has 60,000 m² (650,000 sq ft) of area with a capacity to receive 1,800 containers, including refrigerated containers. The port serves the state of Chiapas and northern Guatemala. Puerto Chiapas serves to import and export products across the Pacific to Asia, the United States, Canada and South America. It also has connections with the Panama Canal. A marina serves yachts in transit. There is an international airport 11 km (6.8 mi) away, as well as a railroad terminal ending at the port proper. Over the past five years the port has grown, with its newest addition being a terminal for cruise ships offering tours to the Izapa site, the Coffee Route, the city of Tapachula, Pozuelos Lake and an Artesanal Chocolate Tour. Principal exports through the port include bananas and banana trees, corn, fertilizer and tuna.
There are thirty-six AM radio stations and sixteen FM stations. There are thirty-seven local television stations and sixty-six repeaters. Newspapers of Chiapas include: Chiapas Hoy, Cuarto Poder, El Heraldo de Chiapas, El Orbe, La Voz del Sureste, and Noticias de Chiapas. | [
{
"paragraph_id": 0,
"text": "Chiapas (Spanish pronunciation: [ˈtʃjapas] ; Tzotzil and Tzeltal: Chyapas [ˈtʃʰjapʰas]), officially the Free and Sovereign State of Chiapas (Spanish: Estado Libre y Soberano de Chiapas), is one of the states that make up the 32 federal entities of Mexico. It comprises 124 municipalities as of September 2017 and its capital and largest city is Tuxtla Gutiérrez. Other important population centers in Chiapas include Ocosingo, Tapachula, San Cristóbal de las Casas, Comitán, and Arriaga. Chiapas is the southernmost state in Mexico, and it borders the states of Oaxaca to the west, Veracruz to the northwest, and Tabasco to the north, and the Petén, Quiché, Huehuetenango, and San Marcos departments of Guatemala to the east and southeast. Chiapas has a significant coastline on the Pacific Ocean to the southwest.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In general, Chiapas has a humid, tropical climate. In the northern area bordering Tabasco, near Teapa, rainfall can average more than 3,000 mm (120 in) per year. In the past, natural vegetation in this region was lowland, tall perennial rainforest, but this vegetation has been almost completely cleared to allow agriculture and ranching. Rainfall decreases moving towards the Pacific Ocean, but it is still abundant enough to allow the farming of bananas and many other tropical crops near Tapachula. On the several parallel sierras or mountain ranges running along the center of Chiapas, the climate can be quite moderate and foggy, allowing the development of cloud forests like those of Reserva de la Biosfera El Triunfo, home to a handful of horned guans, resplendent quetzals, and azure-rumped tanagers.",
"title": ""
},
{
"paragraph_id": 2,
"text": "Chiapas is home to the ancient Mayan ruins of Palenque, Yaxchilán, Bonampak, Chinkultic and Toniná. It is also home to one of the largest indigenous populations in the country, with ten federally recognized ethnicities.",
"title": ""
},
{
"paragraph_id": 3,
"text": "The official name of the state is Chiapas, which is believed to have come from the ancient city of Chiapan, which in Náhuatl means \"the place where the chia sage grows.\" After the Spanish arrived (1522), they established two cities called Chiapas de los Indios and Chiapas de los Españoles (1528), with the name of Provincia de Chiapas for the area around the cities. The first coat of arms of the region dates from 1535 as that of the Ciudad Real (San Cristóbal de las Casas). Chiapas painter Javier Vargas Ballinas designed the modern coat of arms.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "Hunter gatherers began to occupy the central valley of the state around 7000 BCE, but little is known about them. The oldest archaeological remains in the seat are located at the Santa Elena Ranch in Ocozocoautla whose finds include tools and weapons made of stone and bone. It also includes burials. In the pre Classic period from 1800 BCE to 300 CE, agricultural villages appeared all over the state although hunter gather groups would persist for long after the era.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Recent excavations in the Soconusco region of the state indicate that the oldest civilization to appear in what is now modern Chiapas is that of the Mokaya, which were cultivating corn and living in houses as early as 1500 BCE, making them one of the oldest in Mesoamerica. There is speculation that these were the forefathers of the Olmec, migrating across the Grijalva Valley and onto the coastal plain of the Gulf of Mexico to the north, which was Olmec territory. One of these people's ancient cities is now the archeological site of Chiapa de Corzo, in which was found the oldest calendar known on a piece of ceramic with a date of 36 BCE. This is three hundred years before the Mayans developed their calendar. The descendants of Mokaya are the Mixe-Zoque.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "During the pre Classic era, it is known that most of Chiapas was not Olmec, but had close relations with them, especially the Olmecs of the Isthmus of Tehuantepec. Olmec-influenced sculpture can be found in Chiapas and products from the state including amber, magnetite, and ilmenite were exported to Olmec lands. The Olmecs came to what is now the northwest of the state looking for amber with one of the main pieces of evidence for this called the Simojovel Ax.",
"title": "History"
},
{
"paragraph_id": 7,
"text": "Mayan civilization began in the pre-Classic period as well, but did not come into prominence until the Classic period (300–900 CE). Development of this culture was agricultural villages during the pre-Classic period with city building during the Classic as social stratification became more complex. The Mayans built cities on the Yucatán Peninsula and west into Guatemala. In Chiapas, Mayan sites are concentrated along the state's borders with Tabasco and Guatemala, near Mayan sites in those entities. Most of this area belongs to the Lacandon Jungle.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Mayan civilization in the Lacandon area is marked by rising exploitation of rain forest resources, rigid social stratification, fervent local identity, waging war against neighboring peoples. At its height, it had large cities, a writing system, and development of scientific knowledge, such as mathematics and astronomy. Cities were centered on large political and ceremonial structures elaborately decorated with murals and inscriptions. Among these cities are Palenque, Bonampak, Yaxchilan, Chinkultic, Toniná and Tenón. The Mayan civilization had extensive trade networks and large markets trading in goods such as animal skins, indigo, amber, vanilla and quetzal feathers. It is not known what ended the civilization but theories range from over population size, natural disasters, disease, and loss of natural resources through over exploitation or climate change.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Nearly all Mayan cities collapsed around the same time, 900 CE. From then until 1500 CE, social organization of the region fragmented into much smaller units and social structure became much less complex. There was some influence from the rising powers of central Mexico but two main indigenous groups emerged during this time, the Zoques and the various Mayan descendants. The Chiapans, for whom the state is named, migrated into the center of the state during this time and settled around Chiapa de Corzo, the old Mixe–Zoque stronghold. There is evidence that the Aztecs appeared in the center of the state around Chiapa de Corza in the 15th century, but were unable to displace the native Chiapa tribe. However, they had enough influence so that the name of this area and of the state would come from Nahuatl.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "When the Spanish arrived in the 16th century, they found the indigenous peoples divided into Mayan and non-Mayan, with the latter dominated by the Zoques and Chiapanecas. The first contact between Spaniards and the people of Chiapas came in 1522, when Hernán Cortés sent tax collectors to the area after Aztec Empire was subdued. The first military incursion was headed by Luis Marín, who arrived in 1523. After three years, Marín was able to subjugate a number of the local peoples, but met with fierce resistance from the Tzotzils in the highlands. The Spanish colonial government then sent a new expedition under Diego de Mazariegos. Mazariegos had more success than his predecessor, but many natives preferred to commit suicide rather than submit to the Spanish. One famous example of this is the Battle of Tepetchia, where many jumped to their deaths in the Sumidero Canyon.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "Indigenous resistance was weakened by continual warfare with the Spaniards and disease. By 1530 almost all of the indigenous peoples of the area had been subdued with the exception of the Lacandons in the deep jungles who actively resisted until 1695. However, the main two groups, the Tzotzils and Tzeltals of the central highlands were subdued enough to establish the first Spanish city, today called San Cristóbal de las Casas, in 1528. It was one of two settlements initially called Villa Real de Chiapa de los Españoles and the other called Chiapa de los Indios.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Soon after, the encomienda system was introduced, which reduced most of the indigenous population to serfdom and many even as slaves as a form of tribute and way of locking in a labor supply for tax payments. The conquistadors brought previously unknown diseases. This, as well as overwork on plantations, dramatically decreased the indigenous population. The Spanish also established missions, mostly under the Dominicans, with the Diocese of Chiapas established in 1538 by Pope Paul III. The Dominican evangelizers became early advocates of the indigenous' people's plight, with Bartolomé de las Casas winning a battle with the passing of a law in 1542 for their protection. This order also worked to make sure that communities would keep their indigenous name with a saint's prefix leading to names such as San Juan Chamula and San Lorenzo Zinacantán. He also advocated adapting the teaching of Christianity to indigenous language and culture. The encomienda system that had perpetrated much of the abuse of the indigenous peoples declined by the end of the 16th century, and was replaced by haciendas. However, the use and misuse of Indian labor remained a large part of Chiapas politics into modern times. Maltreatment and tribute payments created an undercurrent of resentment in the indigenous population that passed on from generation to generation. One uprising against high tribute payments occurred in the Tzeltal communities in the Los Alto region in 1712. Soon, the Tzoltzils and Ch'ols joined the Tzeltales in rebellion, but within a year the government was able to extinguish the rebellion.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "As of 1778, Thomas Kitchin described Chiapas as \"the metropolis of the original Mexicans,\" with a population of approximately 20,000, and consisting mainly of indigenous peoples. The Spanish introduced new crops such as sugar cane, wheat, barley and indigo as main economic staples along native ones such as corn, cotton, cacao and beans. Livestock such as cattle, horses and sheep were introduced as well. Regions would specialize in certain crops and animals depending on local conditions and for many of these regions, communication and travel were difficult. Most Europeans and their descendants tended to concentrate in cities such as Ciudad Real, Comitán, Chiapa and Tuxtla. Intermixing of the races was prohibited by colonial law but by the end of the 17th century there was a significant mestizo population. Added to this was a population of African slaves brought in by the Spanish in the middle of the 16th century due to the loss of native workforce.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "Initially, \"Chiapas\" referred to the first two cities established by the Spanish in what is now the center of the state and the area surrounding them. Two other regions were also established, the Soconusco and Tuxtla, all under the regional colonial government of Guatemala. Chiapas, Soconusco and Tuxla regions were united to the first time as an intendencia during the Bourbon Reforms in 1790 as an administrative region under the name of Chiapas. However, within this intendencia, the division between Chiapas and Soconusco regions would remain strong and have consequences at the end of the colonial period.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "From the colonial period Chiapas was relatively isolated from the colonial authorities in Mexico City and regional authorities in Guatemala. One reason for this was the rugged terrain. Another was that much of Chiapas was not attractive to the Spanish. It lacked mineral wealth, large areas of arable land, and easy access to markets. This isolation spared it from battles related to Independence. José María Morelos y Pavón did enter the city of Tonalá but incurred no resistance. The only other insurgent activity was the publication of a newspaper called El Pararrayos by Matías de Córdova in San Cristóbal de las Casas.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "Following the end of Spanish rule in New Spain, it was unclear what new political arrangements would emerge. The isolation of Chiapas from centers of power, along with the strong internal divisions in the intendencia caused a political crisis after the royal government collapsed in Mexico City in 1821, ending the Mexican War of Independence. During this war, a group of influential Chiapas merchants and ranchers sought the establishment of the Free State of Chiapas. This group became known as the La Familia Chiapaneca. However, this alliance did not last with the lowlands preferring inclusion among the new republics of Central America and the highlands annexation to Mexico. In 1821, a number of cities in Chiapas, starting in Comitán, declared the state's separation from the Spanish empire. In 1823, Guatemala became part of the United Provinces of Central America, which united to form a federal republic that would last from 1823 to 1839. With the exception of the pro-Mexican Ciudad Real (San Cristóbal) and some others, many Chiapanecan towns and villages favored a Chiapas independent of Mexico and some favored unification with Guatemala.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "Elites in highland cities pushed for incorporation into Mexico. In 1822, then-Emperor Agustín de Iturbide decreed that Chiapas was part of Mexico. In 1823, the Junta General de Gobierno was held and Chiapas declared independence again. In July 1824, the Soconusco District of southwestern Chiapas split off from Chiapas, announcing that it would join the Central American Federation. In September of the same year, a referendum was held on whether the intendencia would join Central America or Mexico, with many of the elite endorsing union with Mexico. This referendum ended in favor of incorporation with Mexico (allegedly through manipulation of the elite in the highlands), but the Soconusco region maintained a neutral status until 1842, when Oaxacans under General Antonio López de Santa Anna occupied the area, and declared it reincorporated into Mexico. Elites of the area would not accept this until 1844. Guatemala would not recognize Mexico's annexation of the Soconusco region until 1895, even though the border between Chiapas and Guatemala had been agreed upon in 1882. The State of Chiapas was officially declared in 1824, with its first constitution in 1826. Ciudad Real was renamed San Cristóbal de las Casas in 1828.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In the decades after the official end of the war, the provinces of Chiapas and Soconusco unified, with power concentrated in San Cristóbal de las Casas. The state's society evolved into three distinct spheres: indigenous peoples, mestizos from the farms and haciendas and the Spanish colonial cities. Most of the political struggles were between the last two groups especially over who would control the indigenous labor force. Economically, the state lost one of its main crops, indigo, to synthetic dyes. There was a small experiment with democracy in the form of \"open city councils\" but it was short-lived because voting was heavily rigged.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "The Universidad Pontificia y Literaria de Chiapas was founded in 1826, with Mexico's second teacher's college founded in the state in 1828.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "With the ouster of conservative Antonio López de Santa Anna, Mexican liberals came to power. The Reform War (1858–1861) fought between Liberals, who favored federalism and sought economic development, decreased power of the Roman Catholic Church, and Mexican army, and Conservatives, who favored centralized autocratic government, retention of elite privileges, did not lead to any military battles in the state. Despite that it strongly affected Chiapas politics. In Chiapas, the Liberal-Conservative division had its own twist. Much of the division between the highland and lowland ruling families was for whom the Indians should work for and for how long as the main shortage was of labor. These families split into Liberals in the lowlands, who wanted further reform and Conservatives in the highlands who still wanted to keep some of the traditional colonial and church privileges. For most of the early and mid 19th century, Conservatives held most of the power and were concentrated in the larger cities of San Cristóbal de las Casas, Chiapa (de Corzo), Tuxtla and Comitán. As Liberals gained the upper hand nationally in the mid-19th century, one Liberal politician Ángel Albino Corzo gained control of the state. Corzo became the primary exponent of Liberal ideas in the southeast of Mexico and defended the Palenque and Pichucalco areas from annexation by Tabasco. However, Corzo's rule would end in 1875, when he opposed the regime of Porfirio Díaz.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "Liberal land reforms would have negative effects on the state's indigenous population unlike in other areas of the country. Liberal governments expropriated lands that were previously held by the Spanish Crown and Catholic Church in order to sell them into private hands. This was not only motivated by ideology, but also due to the need to raise money. However, many of these lands had been in a kind of \"trust\" with the local indigenous populations, who worked them. Liberal reforms took away this arrangement and many of these lands fell into the hands of large landholders who when made the local Indian population work for three to five days a week just for the right to continue to cultivate the lands. This requirement caused many to leave and look for employment elsewhere. Most became \"free\" workers on other farms, but they were often paid only with food and basic necessities from the farm shop. If this was not enough, these workers became indebted to these same shops and then unable to leave.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The opening up of these lands also allowed many whites and mestizos (often called Ladinos in Chiapas) to encroach on what had been exclusively indigenous communities in the state. These communities had had almost no contact with the Ladino world, except for a priest. The new Ladino landowners occupied their acquired lands as well as others, such as shopkeepers, opened up businesses in the center of Indian communities. In 1848, a group of Tzeltals plotted to kill the new mestizos in their midst, but this plan was discovered, and was punished by the removal of large number of the community's male members. The changing social order had severe negative effects on the indigenous population with alcoholism spreading, leading to more debts as it was expensive. The struggles between Conservatives and Liberals nationally disrupted commerce and confused power relations between Indian communities and Ladino authorities. It also resulted in some brief respites for Indians during times when the instability led to uncollected taxes.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "One other effect that Liberal land reforms had was the start of coffee plantations, especially in the Soconusco region. One reason for this push in this area was that Mexico was still working to strengthen its claim on the area against Guatemala's claims on the region. The land reforms brought colonists from other areas of the country as well as foreigners from England, the United States and France. These foreign immigrants would introduce coffee production to the areas, as well as modern machinery and professional administration of coffee plantations. Eventually, this production of coffee would become the state's most important crop.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Although the Liberals had mostly triumphed in the state and the rest of the country by the 1860s, Conservatives still held considerable power in Chiapas. Liberal politicians sought to solidify their power among the indigenous groups by weakening the Roman Catholic Church. The more radical of these even allowed indigenous groups the religious freedoms to return to a number of native rituals and beliefs such as pilgrimages to natural shrines such as mountains and waterfalls.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "This culminated in the Chiapas \"caste war\", which was an uprising of Tzotzils beginning in 1868. The basis of the uprising was the establishment of the \"three stones cult\" in Tzajahemal. Agustina Gómez Checheb was a girl tending her father's sheep when three stones fell from the sky. Collecting them, she put them on her father's altar and soon claimed that the stone communicated with her. Word of this soon spread and the \"talking stones\" of Tzajahemel soon became a local indigenous pilgrimage site. The cult was taken over by one pilgrim, Pedro Díaz Cuzcat, who also claimed to be able to communicate with the stones, and had knowledge of Catholic ritual, becoming a kind of priest. However, this challenged the traditional Catholic faith and non Indians began to denounce the cult. Stories about the cult include embellishments such as the crucifixion of a young Indian boy.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "This led to the arrest of Checheb and Cuzcat in December 1868. This caused resentment among the Tzotzils. Although the Liberals had earlier supported the cult, Liberal landowners had also lost control of much of their Indian labor and Liberal politicians were having a harder time collecting taxes from indigenous communities. An Indian army gathered at Zontehuitz then attacked various villages and haciendas. By the following June the city of San Cristóbal was surrounded by several thousand Indians, who offered the exchanged of several Ladino captives for their religious leaders and stones. Chiapas governor Dominguéz came to San Cristóbal with about three hundred heavily armed men, who then attacked the Indian force armed only with sticks and machetes. The indigenous force was quickly dispersed and routed with government troops pursuing pockets of guerrilla resistance in the mountains until 1870. The event effectively returned control of the indigenous workforce back to the highland elite.",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The Porfirio Díaz era at the end of the 19th century and beginning of the 20th was initially thwarted by regional bosses called caciques, bolstered by a wave of Spanish and mestizo farmers who migrated to the state and added to the elite group of wealthy landowning families. There was some technological progress such as a highway from San Cristóbal to the Oaxaca border and the first telephone line in the 1880s, but Porfirian era economic reforms would not begin until 1891 with Governor Emilio Rabasa. This governor took on the local and regional caciques and centralized power into the state capital, which he moved from San Cristóbal de las Casas to Tuxtla in 1892. He modernized public administration, transportation and promoted education. Rabasa also introduced the telegraph, limited public schooling, sanitation and road construction, including a route from San Cristóbal to Tuxtla then Oaxaca, which signaled the beginning of favoritism of development in the central valley over the highlands. He also changed state policies to favor foreign investment, favored large land mass consolidation for the production of cash crops such as henequen, rubber, guayule, cochineal and coffee. Agricultural production boomed, especially coffee, which induced the construction of port facilities in Tonalá. The economic expansion and investment in roads also increased access to tropical commodities such as hardwoods, rubber and chicle.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "These still required cheap and steady labor to be provided by the indigenous population. By the end of the 19th century, the four main indigenous groups, Tzeltals, Tzotzils, Tojolabals and Ch’ols were living in \"reducciones\" or reservations, isolated from one another. Conditions on the farms of the Porfirian era was serfdom, as bad if not worse than for other indigenous and mestizo populations leading to the Mexican Revolution. While this coming event would affect the state, Chiapas did not follow the uprisings in other areas that would end the Porfirian era.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "Japanese immigration to Mexico began in 1897 when the first thirty five migrants arrived in Chiapas to work on coffee farms, so that Mexico was the first Latin American country to receive organized Japanese immigration. Although this colony ultimately failed, there remains a small Japanese community in Acacoyagua, Chiapas.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "In the early 20th century and into the Mexican Revolution, the production of coffee was particularly important but labor-intensive. This would lead to a practice called enganche (hook), where recruiters would lure workers with advanced pay and other incentives such as alcohol and then trap them with debts for travel and other items to be worked off. This practice would lead to a kind of indentured servitude and uprisings in areas of the state, although they never led to large rebel armies as in other parts of Mexico.",
"title": "History"
},
{
"paragraph_id": 31,
"text": "A small war broke out between Tuxtla Gutiérrez and San Cristobal in 1911. San Cristóbal, allied with San Juan Chamula, tried to regain the state's capital but the effort failed. San Cristóbal de las Casas, which had a very limited budget, to the extent that it had to ally with San Juan Chamula challenged Tuxtla Gutierrez which, with only a small ragtag army overwhelmingly defeated the army helped by chamulas from San Cristóbal. There were three years of peace after that until troops allied with the \"First Chief\" of the revolutionary Constitutionalist forces, Venustiano Carranza, entered in 1914 taking over the government, with the aim of imposing the Ley de Obreros (Workers' Law) to address injustices against the state's mostly indigenous workers. Conservatives responded violently months later when they were certain the Carranza forces would take their lands. This was mostly by way of guerrilla actions headed by farm owners who called themselves the Mapaches. This action continued for six years, until President Carranza was assassinated in 1920 and revolutionary general Álvaro Obregón became president of Mexico. This allowed the Mapaches to gain political power in the state and effectively stop many of the social reforms occurring in other parts of Mexico.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "The Mapaches continued to fight against socialists and communists in Mexico from 1920 to 1936, to maintain their control over the state. In general, elite landowners also allied with the nationally dominant party founded by Plutarco Elías Calles following the assassination of president-elect Obregón in 1928; that party was renamed the Institutional Revolutionary Party in 1946. Through that alliance, they could block land reform in this way as well. The Mapaches were first defeated in 1925 when an alliance of socialists and former Carranza loyalists had Carlos A. Vidal selected as governor, although he was assassinated two years later. The last of the Mapache resistance was overcome in the early 1930s by Governor Victorico Grajales, who pursued President Lázaro Cárdenas' social and economic policies including persecution of the Catholic Church. These policies would have some success in redistributing lands and organizing indigenous workers but the state would remain relatively isolated for the rest of the 20th century. The territory was reorganized into municipalities in 1916. The current state constitution was written in 1921.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "There was political stability from the 1940s to the early 1970s; however, regionalism regained with people thinking of themselves as from their local city or municipality over the state. This regionalism impeded the economy as local authorities restrained outside goods. For this reason, construction of highways and communications were pushed to help with economic development. Most of the work was done around Tuxtla Gutiérrez and Tapachula. This included the Sureste railroad connecting northern municipalities such as Pichucalco, Salto de Agua, Palenque, Catazajá and La Libertad. The Cristobal Colon highway linked Tuxtla to the Guatemalan border. Other highways included El Escopetazo to Pichucalco, a highway between San Cristóbal and Palenque with branches to Cuxtepeques and La Frailesca. This helped to integrate the state's economy, but it also permitted the political rise of communal land owners called ejidatarios.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "In the mid-20th century, the state experienced a significant rise in population, which outstripped local resources, especially land in the highland areas. Since the 1930s, many indigenous and mestizos have migrated from the highland areas into the Lacandon Jungle with the populations of Altamirano, Las Margaritas, Ocosingo and Palenque rising from less than 11,000 in 1920 to over 376,000 in 2000. These migrants came to the jungle area to clear forest and grow crops and raise livestock, especially cattle. Economic development in general raised the output of the state, especially in agriculture, but it had the effect of deforesting many areas, especially the Lacandon. Added to this was there were still serf like conditions for many workers and insufficient educational infrastructure. Population continued to increase faster than the economy could absorb. There were some attempts to resettle peasant farmers onto non cultivated lands, but they were met with resistance. President Gustavo Díaz Ordaz awarded a land grant to the town of Venustiano Carranza in 1967, but that land was already being used by cattle-ranchers who refused to leave. The peasants tried to take over the land anyway, but when violence broke out, they were forcibly removed. In Chiapas poor farmland and severe poverty afflict the Mayan Indians which led to unsuccessful non violent protests and eventually armed struggle started by the Zapatista National Liberation Army in January 1994.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "These events began to lead to political crises in the 1970s, with more frequent land invasions and takeovers of municipal halls. This was the beginning of a process that would lead to the emergence of the Zapatista movement in the 1990s. Another important factor to this movement would be the role of the Catholic Church from the 1960s to the 1980s. In 1960, Samuel Ruiz became the bishop of the Diocese of Chiapas, centered in San Cristóbal. He supported and worked with Marist priests and nuns following an ideology called liberation theology. In 1974, he organized a statewide \"Indian Congress\" with representatives from the Tzeltal, Tzotzil, Tojolabal and Ch'ol peoples from 327 communities as well as Marists and the Maoist People's Union. This congress was the first of its kind with the goal of uniting the indigenous peoples politically. These efforts were also supported by leftist organizations from outside Mexico, especially to form unions of ejido organizations. These unions would later form the base of the EZLN organization. One reason for the Church's efforts to reach out to the indigenous population was that starting in the 1970s, a shift began from traditional Catholic affiliation to Protestant, Evangelical and other Christian sects.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The 1980s saw a large wave of refugees coming into the state from Central America as a number of these countries, especially Guatemala, were in the midst of violent political turmoil. The Chiapas/Guatemala border had been relatively porous with people traveling back and forth easily in the 19th and 20th centuries, much like the Mexico/U.S. border around the same time. This is in spite of tensions caused by Mexico's annexation of the Soconusco region in the 19th century. The border between Mexico and Guatemala had been traditionally poorly guarded, due to diplomatic considerations, lack of resources and pressure from landowners who need cheap labor sources.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "The arrival of thousands of refugees from Central America stressed Mexico's relationship with Guatemala, at one point coming close to war as well as a politically destabilized Chiapas. Although Mexico is not a signatory to the UN Convention Relating to the Status of Refugees, international pressure forced the government to grant official protection to at least some of the refugees. Camps were established in Chiapas and other southern states, and mostly housed Mayan peoples. However, most Central American refugees from that time never received any official status, estimated by church and charity groups at about half a million from El Salvador alone. The Mexican government resisted direct international intervention in the camps, but eventually relented somewhat because of finances. By 1984, there were 92 camps with 46,000 refugees in Chiapas, concentrated in three areas, mostly near the Guatemalan border. To make matters worse, the Guatemalan army conducted raids into camps on Mexican territories with significant casualties, terrifying the refugees and local populations. From within Mexico, refugees faced threats by local governments who threatened to deport them, legally or not, and local paramilitary groups funded by those worried about the political situation in Central America spilling over into the state. The official government response was to militarize the areas around the camps, which limited international access and migration into Mexico from Central America was restricted. By 1990, it was estimated that there were over 200,000 Guatemalans and half a million from El Salvador, almost all peasant farmers and most under age twenty.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "In the 1980s, the politization of the indigenous and rural populations of the state that began in the 1960s and 1970s continued. In 1980, several ejido (communal land organizations) joined to form the Union of Ejidal Unions and United Peasants of Chiapas, generally called the Union of Unions, or UU. It had a membership of 12,000 families from over 180 communities. By 1988, this organization joined with other to form the ARIC-Union of Unions (ARIC-UU) and took over much of the Lacandon Jungle portion of the state. Most of the members of these organization were from Protestant and Evangelical sects as well as \"Word of God\" Catholics affiliated with the political movements of the Diocese of Chiapas. What they held in common was indigenous identity vis-à-vis the non-indigenous, using the old 19th century \"caste war\" word \"Ladino\" for them.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "The adoption of liberal economic reforms by the Mexican federal government clashed with the leftist political ideals of these groups, notably as the reforms were believed to have begun to have negative economic effects on poor farmers, especially small-scale indigenous coffee-growers. Opposition would coalesce into the Zapatista movement in the 1990s. Although the Zapatista movement couched its demands and cast its role in response to contemporary issues, especially in its opposition to neoliberalism, it operates in the tradition of a long line of peasant and indigenous uprisings that have occurred in the state since the colonial era. This is reflected in its indigenous vs. Mestizo character. However, the movement was an economic one as well. Although the area has extensive resources, much of the local population of the state, especially in rural areas, did not benefit from this bounty. In the 1990s, two thirds of the state's residents did not have sewage service, only a third had electricity and half did not have potable water. Over half of the schools offered education only to the third grade and most pupils dropped out by the end of first grade. Grievances, strongest in the San Cristóbal and Lacandon Jungle areas, were taken up by a small leftist guerrilla band led by a man called only \"Subcomandante Marcos.\"",
"title": "History"
},
{
"paragraph_id": 40,
"text": "This small band, called the Zapatista Army of National Liberation (Ejército Zapatista de Liberación Nacional, EZLN), came to the world's attention when on January 1, 1994 (the day the NAFTA treaty went into effect) EZLN forces occupied and took over the towns of San Cristobal de las Casas, Las Margaritas, Altamirano, Ocosingo and three others. They read their proclamation of revolt to the world and then laid siege to a nearby military base, capturing weapons and releasing many prisoners from the jails. This action followed previous protests in the state in opposition to neoliberal economic policies.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "Although it has been estimated as having no more than 300 armed guerrilla members, the EZLN paralyzed the Mexican government, which balked at the political risks of direct confrontation. The major reason for this was that the rebellion caught the attention of the national and world press, as Marcos made full use of the then-new Internet to get the group's message out, putting the spotlight on indigenous issues in Mexico in general. Furthermore, the opposition press in Mexico City, especially La Jornada, actively supported the rebels. These factors encouraged the rebellion to go national. Many blamed the unrest on infiltration of leftists among the large Central American refugee population in Chiapas, and the rebellion opened up splits in the countryside between those supporting and opposing the EZLN. Zapatista sympathizers have included mostly Protestants and Word of God Catholics, opposing those \"traditionalist\" Catholics who practiced a syncretic form of Catholicism and indigenous beliefs. This split had existed in Chiapas since the 1970s, with the latter group supported by the caciques and others in the traditional power-structure. Protestants and Word of God Catholics (allied directly with the bishopric in San Cristóbal) tended to oppose traditional power structures.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The Bishop of Chiapas, Samuel Ruiz, and the Diocese of Chiapas reacted by offering to mediate between the rebels and authorities. However, because of this diocese's activism since the 1960s, authorities accused the clergy of being involved with the rebels. There was some ambiguity about the relationship between Ruiz and Marcos and it was a constant feature of news coverage, with many in official circles using such to discredit Ruiz. Eventually, the activities of the Zapatistas began to worry the Roman Catholic Church in general and to upstage the diocese's attempts to re establish itself among Chiapan indigenous communities against Protestant evangelization. This would lead to a breach between the Church and the Zapatistas.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "The Zapatista story remained in headlines for a number of years. One reason for this was the December 1997 massacre of forty-five unarmed Tzotzil peasants, mostly women and children, in the Zapatista-controlled village of Acteal in the Chenhaló municipality just north of San Cristóbal. This allowed many media outlets in Mexico to step up their criticisms of the government.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "Despite this, the armed conflict was brief, mostly because the Zapatistas, unlike many other guerilla movements, did not try to gain traditional political power. It focused more on trying to manipulate public opinion in order to obtain concessions from the government. This has linked the Zapatistas to other indigenous and identity-politics movements that arose in the late-20th century. The main concession that the group received was the San Andrés Accords (1996), also known as the Law on Indian Rights and Culture. The Accords appear to grant certain indigenous zones autonomy, but this is against the Mexican constitution, so its legitimacy has been questioned. Zapatista declarations since the mid-1990s have called for a new constitution. As of 1999 the government had not found a solution to this problem. The revolt also pressed the government to institute anti-poverty programs such as \"Progresa\" (later called \"Oportunidades\") and the \"Puebla-Panama Plan\" – aiming to increase trade between southern Mexico and Central America.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "As of the first decade of the 2000s the Zapatista movement remained popular in many indigenous communities. The uprising gave indigenous peoples a more active role in the state's politics. However, it did not solve the economic issues that many peasant farmers face, especially the lack of land to cultivate. This problem has been at crisis proportions since the 1970s, and the government's reaction has been to encourage peasant farmers—mostly indigenous—to migrate into the sparsely populated Lacandon Jungle, a trend since earlier in the century.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "From the 1970s on, some 100,000 people set up homes in this rainforest area, with many being recognized as ejidos, or communal land-holding organizations. These migrants included Tzeltals, Tojolabals, Ch'ols and mestizos, mostly farming corn and beans and raising livestock. However, the government changed policies in the late 1980s with the establishment of the Montes Azules Biosphere Reserve, as much of the Lacandon Jungle had been destroyed or severely damaged. While armed resistance has wound down, the Zapatistas have remained a strong political force, especially around San Cristóbal and the Lacandon Jungle, its traditional bases. Since the Accords, they have shifted focus in gaining autonomy for the communities they control.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "Since the 1994 uprising, migration into the Lacandon Jungle has significantly increased, involving illegal settlements and cutting in the protected biosphere reserve. The Zapatistas support these actions as part of indigenous rights, but that has put them in conflict with international environmental groups and with the indigenous inhabitants of the rainforest area, the Lacandons. Environmental groups state that the settlements pose grave risks to what remains of the Lacandon, while the Zapatistas accuse them of being fronts for the government, which wants to open the rainforest up to multinational corporations. Added to this is the possibility that significant oil and gas deposits exist under this area.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "The Zapatista movement has had some successes. The agricultural sector of the economy now favors ejidos and other commonly-owned land. There have been some other gains economically as well. In the last decades of the 20th century, Chiapas's traditional agricultural economy has diversified somewhat with the construction of more roads and better infrastructure by the federal and state governments. Tourism has become important in some areas of the state, especially in San Cristóbal de las Casas and Palenque. Its economy is important to Mexico as a whole as well, producing coffee, corn, cacao, tobacco, sugar, fruit, vegetables and honey for export. It is also a key state for the nation's petrochemical and hydroelectric industries. A significant percentage of PEMEX's drilling and refining takes place in Chiapas and Tabasco, and Chiapas produces fifty-five percent of Mexico's hydroelectric energy.",
"title": "History"
},
{
"paragraph_id": 49,
"text": "However, Chiapas remains one of the poorest states in Mexico. Ninety-four of its 111 municipalities have a large percentage of the population living in poverty. In areas such as Ocosingo, Altamirano and Las Margaritas, the towns where the Zapatistas first came into prominence in 1994, 48% of the adults were illiterate. Chiapas is still considered isolated and distant from the rest of Mexico, both culturally and geographically. It has significantly underdeveloped infrastructure compared to the rest of the country, and its significant indigenous population with isolationist tendencies keep the state distinct culturally. Cultural stratification, neglect and lack of investment by the Mexican federal government has exacerbated this problem.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "In early November 2023, signed by rebel Subcomandante Moises and EZLN that announced the dissolution of the Rebel Zapatista Autonomous Municipalities due to the cartel violence generated by Sinaloa Cartel and Jalisco New Generation Cartel and violent border clashes in Guatemala due to the increasing violence growing on the border. Caracoles will remain open to locals but remain closed to outsiders, and the previous MAREZ system will be reorganized into a new autonomous system.",
"title": "History"
},
{
"paragraph_id": 51,
"text": "Chiapas is located in the south east of Mexico, bordering the states of Tabasco, Veracruz and Oaxaca with the Pacific Ocean to the south and Guatemala to the east. It has a territory of 74,415 km, the eighth largest state in Mexico. The state consists of 118 municipalities organized into nine political regions called Center, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa. There are 18 cities, twelve towns (villas) and 111 pueblos (villages). Major cities include Tuxtla Gutiérrez, San Cristóbal de las Casas, Tapachula, Palenque, Comitán, and Chiapa de Corzo.",
"title": "Geography"
},
{
"paragraph_id": 52,
"text": "The state has a complex geography with seven distinct regions according to the Mullerried classification system. These include the Pacific Coast Plains, the Sierra Madre de Chiapas, the Central Depression, the Central Highlands, the Eastern Mountains, the Northern Mountains and the Gulf Coast Plains. The Pacific Coast Plains is a strip of land parallel to the ocean. It is composed mostly of sediment from the mountains that border it on the northern side. It is uniformly flat, and stretches from the Bernal Mountain south to Tonalá. It has deep salty soils due to its proximity to the sea. It has mostly deciduous rainforest although most has been converted to pasture for cattle and fields for crops. It has numerous estuaries with mangroves and other aquatic vegetation.",
"title": "Geography"
},
{
"paragraph_id": 53,
"text": "The Sierra Madre de Chiapas runs parallel to the Pacific coastline of the state, northwest to southeast as a continuation of the Sierra Madre del Sur. This area has the highest altitudes in Chiapas including the Tacaná Volcano, which rises 4,093 m (13,428 ft) above sea level. Most of these mountains are volcanic in origin although the nucleus is metamorphic rock. It has a wide range of climates but little arable land. It is mostly covered in middle altitude rainforest, high altitude rainforest, and forests of oaks and pines. The mountains partially block rain clouds from the Pacific, a process known as Orographic lift, which creates a particularly rich coastal region called the Soconusco. The main commercial center of the sierra is the town of Motozintla, also near the Guatemalan border.",
"title": "Geography"
},
{
"paragraph_id": 54,
"text": "The Central Depression is in the center of the state. It is an extensive semi flat area bordered by the Sierra Madre de Chiapas, the Central Highlands and the Northern Mountains. Within the depression there are a number of distinct valleys. The climate here can be very hot and humid in the summer, especially due to the large volume of rain received in July and August. The original vegetation was lowland deciduous forest with some rainforest of middle altitudes and some oaks above 1,500 m (4,900 ft) above sea level.",
"title": "Geography"
},
{
"paragraph_id": 55,
"text": "The Central Highlands, also referred to as Los Altos, are mountains oriented from northwest to southeast with altitudes ranging from one thousand two hundred to one thousand six hundred m (3,900 to 5,200 ft) above sea level. The western highlands are displaced faults, while the eastern highlands are mainly folds of sedimentary formations – mainly limestone, shale, and sandstone. These mountains, along the Sierra Madre of Chiapas become the Cuchumatanes where they extend over the border into Guatemala. Its topography is mountainous with many narrow valleys and karst formations called uvalas or poljés, depending on the size. Most of the rock is limestone allowing for a number of formations such as caves and sinkholes. There are also some isolated pockets of volcanic rock with the tallest peaks being the Tzontehuitz and Huitepec volcanos. There are no significant surface water systems as they are almost all underground. The original vegetation was forest of oak and pine but these have been heavily damaged. The highlands climate in the Koeppen modified classification system for Mexico is humid temperate C(m) and subhumid temperate C (w 2 ) (w). This climate exhibits a summer rainy season and a dry winter, with possibilities of frost from December to March. The Central Highlands have been the population center of Chiapas since the Conquest. European epidemics were hindered by the tierra fría climate, allowing the indigenous peoples in the highlands to retain their large numbers.",
"title": "Geography"
},
{
"paragraph_id": 56,
"text": "The Eastern Mountains (Montañas del Oriente) are in the east of the state, formed by various parallel mountain chains mostly made of limestone and sandstone. Its altitude varies from 500 to 1,500 m (1,600 to 4,900 ft). This area receives moisture from the Gulf of Mexico with abundant rainfall and exuberant vegetation, which creates the Lacandon Jungle, one of the most important rainforests in Mexico. The Northern Mountains (Montañas del Norte) are in the north of the state. They separate the flatlands of the Gulf Coast Plains from the Central Depression. Its rock is mostly limestone. These mountains also receive large amounts of rainfall with moisture from the Gulf of Mexico giving it a mostly hot and humid climate with rains year round. In the highest elevations around 1,800 m (5,900 ft), temperatures are somewhat cooler and do experience a winter. The terrain is rugged with small valleys whose natural vegetation is high altitude rainforest.",
"title": "Geography"
},
{
"paragraph_id": 57,
"text": "The Gulf Coast Plains (Llanura Costera del Golfo) stretch into Chiapas from the state of Tabasco, which gives it the alternate name of the Tabasqueña Plains. These plains are found only in the extreme north of the state. The terrain is flat and prone to flooding during the rainy season as it was built by sediments deposited by rivers and streams heading to the Gulf.",
"title": "Geography"
},
{
"paragraph_id": 58,
"text": "The Lacandon Jungle is situated in north eastern Chiapas, centered on a series of canyonlike valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. The ecosystem covers an area of approximately 1.9×10 ha (4.7×10 acres) extending from Chiapas into northern Guatemala and southern Yucatán Peninsula and into Belize. This area contains as much as 25% of Mexico's total species diversity, most of which has not been researched. It has a predominantly hot and humid climate (Am w\" i g) with most rain falling from summer to part of fall, with an average of between 2300 and 2600 mm per year. There is a short dry season from March to May. The predominant wild vegetation is perennial high rainforest. The Lacandon comprises a biosphere reserve (Montes Azules); four natural protected areas (Bonampak, Yaxchilan, Chan Kin, and Lacantum); and the communal reserve (La Cojolita), which functions as a biological corridor with the area of Petén in Guatemala. Flowing within the Rainforest is the Usumacinta River, considered to be one of the largest rivers in Mexico and seventh largest in the world based on volume of water.",
"title": "Geography"
},
{
"paragraph_id": 59,
"text": "During the 20th century, the Lacandon has had a dramatic increase in population and along with it, severe deforestation. The population of municipalities in this area, Altamirano, Las Margaritas, Ocosingo and Palenque have risen from 11,000 in 1920 to over 376,000 in 2000. Migrants include Ch'ol, Tzeltal, Tzotzil, Tojolabal indigenous peoples along with mestizos, Guatemalan refugees and others. Most of these migrants are peasant farmers, who cut forest to plant crops. However, the soil of this area cannot support annual crop farming for more than three or four harvests. The increase in population and the need to move on to new lands has pitted migrants against each other, the native Lacandon people, and the various ecological reserves for land. It is estimated that only ten percent of the original Lacandon rainforest in Mexico remains, with the rest strip-mined, logged and farmed. It once stretched over a large part of eastern Chiapas but all that remains is along the northern edge of the Guatemalan border. Of this remaining portion, Mexico is losing over five percent each year.",
"title": "Geography"
},
{
"paragraph_id": 60,
"text": "The best preserved portion of the Lacandon is within the Montes Azules Biosphere Reserve. It is centered on what was a commercial logging grant by the Porfirio Díaz government, which the government later nationalized. However, this nationalization and conversion into a reserve has made it one of the most contested lands in Chiapas, with the already existing ejidos and other settlements within the park along with new arrivals squatting on the land.",
"title": "Geography"
},
{
"paragraph_id": 61,
"text": "The Soconusco region encompasses a coastal plain and a mountain range with elevations of up to 2,000 m (6,600 ft) above sea levels paralleling the Pacific Coast. The highest peak in Chiapas is the Tacaná Volcano at 4,060 m (13,320 ft) above sea level. In accordance with an 1882 treaty, the dividing line between Mexico and Guatemala goes right over the summit of this volcano. The climate is tropical, with a number of rivers and evergreen forests in the mountains. This is Chiapas's major coffee-producing area, as it has the best soils and climate for coffee. Before the arrival of the Spanish, this area was the principal source of cocoa seeds in the Aztec empire, which they used as currency, and for the highly prized quetzal feathers used by the nobility. It would become the first area to produce coffee, introduced by an Italian entrepreneur on the La Chacara farm. Coffee is cultivated on the slopes of these mountains mostly between 600 and 1,200 m (2,000 and 3,900 ft) asl. Mexico produces about 4 million sacks of green coffee each year, fifth in the world behind Brazil, Colombia, Indonesia and Vietnam. Most producers are small with plots of land under five ha (12 acres). From November to January, the annual crop is harvested and processed employing thousands of seasonal workers. Lately, a number of coffee haciendas have been developing tourism infrastructure as well.",
"title": "Geography"
},
{
"paragraph_id": 62,
"text": "Chiapas is located in the tropical belt of the planet, but the climate is moderated in many areas by altitude. For this reason, there are hot, semi-hot, temperate and even cold climates. Some areas have abundant rainfall year-round and others receive most of their rain between May and October, with a dry season from November to April. The mountain areas affect wind and moisture flow over the state, concentrating moisture in certain areas of the state. They also are responsible for some cloud-covered rainforest areas in the Sierra Madre.",
"title": "Geography"
},
{
"paragraph_id": 63,
"text": "Chiapas's rainforests are home to thousands of animals and plants, some of which cannot be found anywhere else in the world. Natural vegetation varies from lowland to highland tropical forest, pine and oak forests in the highest altitudes and plains area with some grassland. Chiapas is ranked second in forest resources in Mexico with valued woods such as pine, cypress, Liquidambar, oak, cedar, mahogany and more. The Lacandon Jungle is one of the last major tropical rainforests in the northern hemisphere with an extension of 600,000 ha (1,500,000 acres). It contains about sixty percent of Mexico's tropical tree species, 3,500 species of plants, 1,157 species of invertebrates and over 500 of vertebrate species. Chiapas has one of the greatest diversities in wildlife in the Americas. There are more than 100 species of amphibians, 700 species of birds, fifty of mammals and just over 200 species of reptiles. In the hot lowlands, there are armadillos, monkeys, pelicans, wild boar, jaguars, crocodiles, iguanas and many others. In the temperate regions there are species such as bobcats, salamanders, a large red lizard Abronia lythrochila, weasels, opossums, deer, ocelots and bats. The coastal areas have large quantities of fish, turtles, and crustaceans, with many species in danger of extinction or endangered as they are endemic only to this area. The total biodiversity of the state is estimated at over 50,000 species of plants and animals. The diversity of species is not limited to the hot lowlands. The higher altitudes also have mesophile forests, oak/pine forests in the Los Altos, Northern Mountains and Sierra Madre and the extensive estuaries and mangrove wetlands along the coast.",
"title": "Geography"
},
{
"paragraph_id": 64,
"text": "Chiapas has about thirty percent of Mexico's fresh water resources. The Sierra Madre divides them into those that flow to the Pacific and those that flow to the Gulf of Mexico. Most of the first are short rivers and streams; most longer ones flow to the Gulf. Most Pacific side rivers do not drain directly into this ocean but into lagoons and estuaries. The two largest rivers are the Grijalva and the Usumacinta, with both part of the same system. The Grijalva has four dams built on it the Belisario Dominguez (La Angostura); Manuel Moreno Torres (Chicoasén); Nezahualcóyotl (Malpaso); and Angel Albino Corzo (Peñitas). The Usumacinta divides the state from Guatemala and is the longest river in Central America. In total, the state has 110,000 ha (270,000 acres) of surface waters, 260 km (160 mi) of coastline, control of 96,000 km (37,000 sq mi) of ocean, 75,230 ha (185,900 acres) of estuaries and ten lake systems. Laguna Miramar is a lake in the Montes Azules reserve and the largest in the Lacandon Jungle at 40 km in diameter. The color of its waters varies from indigo to emerald green and in ancient times, there were settlements on its islands and its caves on the shoreline. The Catazajá Lake is 28 km north of the city of Palenque. It is formed by rainwater captured as it makes its way to the Usumacinta River. It contains wildlife such as manatees and iguanas and it is surrounded by rainforest. Fishing on this lake is an ancient tradition and the lake has an annual bass fishing tournament. The Welib Já Waterfall is located on the road between Palenque and Bonampak.",
"title": "Geography"
},
{
"paragraph_id": 65,
"text": "The state has thirty-six protected areas at the state and federal levels along with 67 areas protected by various municipalities. The Sumidero Canyon National Park was decreed in 1980 with an extension of 21,789 ha (53,840 acres). It extends over two of the regions of the state, the Central Depression and the Central Highlands over the municipalities of Tuxtla Gutiérrez, Nuevo Usumacinta, Chiapa de Corzo and San Fernando. The canyon has steep and vertical sides that rise to up to 1000 meters from the river below with mostly tropical rainforest but some areas with xerophile vegetation such as cactus can be found. The river below, which has cut the canyon over the course of twelve million years, is called the Grijalva. The canyon is emblematic for the state as it is featured in the state seal. The Sumidero Canyon was once the site of a battle between the Spaniards and Chiapanecan Indians. Many Chiapanecans chose to throw themselves from the high edges of the canyon rather than be defeated by Spanish forces. Today, the canyon is a popular destination for ecotourism. Visitors can take boat trips down the river that runs through the canyon and see the area's many birds and abundant vegetation.",
"title": "Geography"
},
{
"paragraph_id": 66,
"text": "The Montes Azules Biosphere Reserve was decreed in 1978. It is located in the northeast of the state in the Lacandon Jungle. It covers 331,200 ha (818,000 acres) in the municipalities of Maravilla Tenejapa, Ocosingo and Las Margaritas. It conserves highland perennial rainforest. The jungle is in the Usumacinta River basin east of the Chiapas Highlands. It is recognized by the United Nations Environment Programme for its global biological and cultural significance. In 1992, the 61,874 ha (152,890-acre) Lacantun Reserve, which includes the Classic Maya archaeological sites of Yaxchilan and Bonampak, was added to the biosphere reserve.",
"title": "Geography"
},
{
"paragraph_id": 67,
"text": "Agua Azul Waterfall Protection Area is in the Northern Mountains in the municipality of Tumbalá. It covers an area of 2,580 ha (6,400 acres) of rainforest and pine-oak forest, centered on the waterfalls it is named after. It is located in an area locally called the \"Mountains of Water\", as many rivers flow through there on their way to the Gulf of Mexico. The rugged terrain encourages waterfalls with large pools at the bottom, that the falling water has carved into the sedimentary rock and limestone. Agua Azul is one of the best known in the state. The waters of the Agua Azul River emerge from a cave that forms a natural bridge of thirty meters and five small waterfalls in succession, all with pools of water at the bottom. In addition to Agua Azul, the area has other attractions—such as the Shumuljá River, which contains rapids and waterfalls, the Misol Há Waterfall with a thirty-meter drop, the Bolón Ajau Waterfall with a fourteen-meter drop, the Gallito Copetón rapids, the Blacquiazules Waterfalls, and a section of calm water called the Agua Clara.",
"title": "Geography"
},
{
"paragraph_id": 68,
"text": "The El Ocote Biosphere Reserve was decreed in 1982 located in the Northern Mountains at the boundary with the Sierra Madre del Sur in the municipalities of Ocozocoautla, Cintalapa and Tecpatán. It has a surface area of 101,288.15 ha (250,288.5 acres) and preserves a rainforest area with karst formations. The Lagunas de Montebello National Park was decreed in 1959 and consists of 7,371 ha (18,210 acres) near the Guatemalan border in the municipalities of La Independencia and La Trinitaria. It contains two of the most threatened ecosystems in Mexico the \"cloud rainforest\" and the Soconusco rainforest. The El Triunfo Biosphere Reserve, decreed in 1990, is located in the Sierra Madre de Chiapas in the municipalities of Acacoyagua, Ángel Albino Corzo, Montecristo de Guerrero, La Concordia, Mapastepec, Pijijiapan, Siltepec and Villa Corzo near the Pacific Ocean with 119,177.29 ha (294,493.5 acres). It conserves areas of tropical rainforest and many freshwater systems endemic to Central America. It is home to around 400 species of birds including several rare species such as the horned guan, the quetzal and the azure-rumped tanager. The Palenque National Forest is centered on the archaeological site of the same name and was decreed in 1981. It is located in the municipality of Palenque where the Northern Mountains meet the Gulf Coast Plain. It extends over 1,381 ha (3,410 acres) of tropical rainforest. The Laguna Bélgica Conservation Zone is located in the north west of the state in the municipality of Ocozocoautla. It covers forty-two hectares centered on the Bélgica Lake. The El Zapotal Ecological Center was established in 1980. Nahá–Metzabok is an area in the Lacandon Forest whose name means \"place of the black lord\" in Nahuatl. It extends over 617.49 km (238.41 sq mi) and in 2010, it was included in the World Network of Biosphere Reserves. Two main communities in the area are called Nahá and Metzabok. They were established in the 1940s, but the oldest communities in the area belong to the Lacandon people. The area has large numbers of wildlife including endangered species such as eagles, quetzals and jaguars.",
"title": "Geography"
},
{
"paragraph_id": 69,
"text": "As of 2010, the population is 4,796,580, the eighth most populous state in Mexico. The 20th century saw large population growth in Chiapas. From fewer than one million inhabitants in 1940, the state had about two million in 1980, and over 4 million in 2005. Overcrowded land in the highlands was relieved when the rainforest to the east was subject to land reform. Cattle ranchers, loggers, and subsistence farmers migrated to the rain forest area. The population of the Lacandon was only one thousand people in 1950, but by the mid-1990s this had increased to 200 thousand. As of 2010, 78% lives in urban communities with 22% in rural communities. While birthrates are still high in the state, they have come down in recent decades from 7.4 per woman in 1950. However, these rates still mean significant population growth in raw numbers. About half of the state's population is under age 20, with an average age of 19. In 2005, there were 924,967 households, 81% headed by men and the rest by women. Most households were nuclear families (70.7%) with 22.1% consisting of extended families.",
"title": "Demographics"
},
{
"paragraph_id": 70,
"text": "More migrate out of Chiapas than migrate in, with emigrants leaving for Tabasco, Oaxaca, Veracruz, State of Mexico and the Federal District (Mexico City) primarily.",
"title": "Demographics"
},
{
"paragraph_id": 71,
"text": "While Catholics remain the majority, their numbers have dropped as many have converted to Protestant denominations in recent decades. Islam is also a small but growing religion due to the Indigenous Muslims as well as Muslim immigrants from Africa continuously rising in numbers. The National Presbyterian Church in Mexico has a large following in Chiapas; some estimate that 40% of the population are followers of the Presbyterian church.",
"title": "Demographics"
},
{
"paragraph_id": 72,
"text": "There are a number of people in the state with African features. These are the descendants of slaves brought to the state in the 16th century. There are also those with predominantly European features who are the descendants of the original Spanish colonizers as well as later immigrants to Mexico. The latter mostly came at the end of the 19th and early 20th century under the Porfirio Díaz regime to start plantations. According to the 2020 Census, 1.02% of Chiapas's population identified as Black, Afro-Mexican, or of African descent.",
"title": "Demographics"
},
{
"paragraph_id": 73,
"text": "Over the history of Chiapas, there have been three main indigenous groups: the Mixes-Zoques, the Mayas and the Chiapas [es]. Today, there are an estimated fifty-six linguistic groups. As of the 2005 Census, there were 957,255 people who spoke an indigenous language out of a total population of about 3.5 million. Of this one million, one third do not speak Spanish. Out of Chiapas's 111 municipios, 99 have majority indigenous populations. 22 municipalities have indigenous populations over 90%, and 36 municipalities have native populations exceeding 50%. However, despite population growth in indigenous villages, the percentage of indigenous to non indigenous continues to fall with less than 35% indigenous. Indian populations are concentrated in a few areas, with the largest concentration of indigenous-language-speaking individuals is living in 5 of Chiapas's 9 economic regions: Los Altos, Selva, Norte, Fronteriza, and Sierra. The remaining three regions, Soconusco, Centro and Costa, have populations that are considered to be predominantly mestizo.",
"title": "Demographics"
},
{
"paragraph_id": 74,
"text": "The state has about 13.5% of all of Mexico's indigenous population, and it has been ranked among the ten \"most indianized\" states, with only Campeche, Oaxaca, Quintana Roo and Yucatán having been ranked above it between 1930 and the present. These indigenous peoples have been historically resistant to assimilation into the broader Mexican society, with it best seen in the retention rates of indigenous languages and the historic demands for autonomy over geographic areas as well as cultural domains. Much of the latter has been prominent since the Zapatista uprising in 1994. Most of Chiapas's indigenous groups are descended from the Mayans, speaking languages that are closely related to one another, belonging to the Western Maya language group. The state was part of a large region dominated by the Mayans during the Classic period. The most numerous of these Mayan groups include the Tzeltal, Tzotzil, Ch'ol, Zoque, Tojolabal, Lacandon and Mam, which have traits in common such as syncretic religious practices, and social structure based on kinship. The most common Western Maya languages are Tzeltal and Tzotzil along with Chontal, Ch’ol, Tojolabal, Chuj, Kanjobal, Acatec, Jacaltec and Motozintlec.",
"title": "Demographics"
},
{
"paragraph_id": 75,
"text": "12 of Mexico's officially recognized native peoples living in the state have conserved their language, customs, history, dress and traditions to a significant degree. The primary groups include the Tzeltal, Tzotzil, Ch'ol, Tojolabal, Zoque, Chuj, Kanjobal, Mam, Jacalteco, Mochó Cakchiquel and Lacandon. Most indigenous communities are found in the municipalities of the Centro, Altos, Norte and Selva regions, with many having indigenous populations of over fifty percent. These include Bochil, Sitalá, Pantepec, Simojovel to those with over ninety percent indigenous such as San Juan Cancuc, Huixtán, Tenejapa, Tila, Oxchuc, Tapalapa, Zinacantán, Mitontic, Ocotepec, Chamula, and Chalchihuitán. The most numerous indigenous communities are the Tzeltal and Tzotzil peoples, who number about 400,000 each, together accounting for about half of the state's indigenous population. The next most numerous are the Ch’ol with about 200,000 people and the Tojolabal and Zoques, who number about 50,000 each. The top 3 municipalities in Chiapas with indigenous language speakers 3 years of age and older are: Ocosingo (133,811), Chilon (96,567), and San Juan Chamula (69,475). These 3 municipalities accounted for 24.8% (299,853) of all indigenous language speakers 3 years or older in the state of Chiapas, out of a total of 1,209,057 indigenous language speakers 3 years or older.",
"title": "Demographics"
},
{
"paragraph_id": 76,
"text": "Although most indigenous language speakers are bilingual, especially in the younger generations, many of these languages have shown resilience. Four of Chiapas's indigenous languages, Tzeltal, Tzotzil, Tojolabal and Chol, are high-vitality languages, meaning that a high percentage of these ethnicities speak the language and that there is a high rate of monolingualism in it. It is used in over 80% of homes. Zoque is considered to be of medium-vitality with a rate of bilingualism of over 70% and home use somewhere between 65% and 80%. Maya is considered to be of low-vitality with almost all of its speakers bilingual with Spanish. The most spoken indigenous languages as of 2010 are Tzeltal with 461,236 speakers, Tzotzil with 417,462, Ch’ol with 191,947 and Zoque with 53,839. In total, there are 1,141,499 who speak an indigenous language or 27% of the total population. Of these, 14% do not speak Spanish. Studies done between 1930 and 2000 have indicated that Spanish is not dramatically displacing these languages. In raw number, speakers of these languages are increasing, especially among groups with a long history of resistance to Spanish/Mexican domination. Language maintenance has been strongest in areas related to where the Zapatista uprising took place such as the municipalities of Altamirano, Chamula, Chanal, Larráinzar, Las Margaritas, Ocosingo, Palenque, Sabanilla, San Cristóbal de Las Casas and Simojovel.",
"title": "Demographics"
},
{
"paragraph_id": 77,
"text": "The state's rich indigenous tradition along with its associated political uprisings, especially that of 1994, has great interest from other parts of Mexico and abroad. It has been especially appealing to a variety of academics including many anthropologists, archeologists, historians, psychologists and sociologists. The concept of \"mestizo\" or mixed indigenous European heritage became important to Mexico's identity by the time of Independence, but Chiapas has kept its indigenous identity to the present day. Since the 1970s, this has been supported by the Mexican government as it has shifted from cultural policies that favor a \"multicultural\" identity for the country. One major exception to the separatist, indigenous identity has been the case of the Chiapa people, from whom the state's name comes, who have mostly been assimilated and intermarried into the mestizo population.",
"title": "Demographics"
},
{
"paragraph_id": 78,
"text": "Most Indigenous communities have economies based primarily on traditional agriculture such as the cultivation and processing of corn, beans and coffee as a cash crop and in the last decade, many have begun producing sugarcane and jatropha for refinement into biodiesel and ethanol for automobile fuel. The raising of livestock, particularly chicken and turkey and to a lesser extent beef and farmed fish is also a major economic activity. Many indigenous people, in particular the Maya, are employed in the production of traditional clothing, fabrics, textiles, wood items, artworks and traditional goods such as jade and amber works. Tourism has provided a number of a these communities with markets for their handcrafts and works, some of which are very profitable.",
"title": "Demographics"
},
{
"paragraph_id": 79,
"text": "San Cristóbal de las Casas and San Juan Chamula maintain a strong indigenous identity. On market day, many indigenous people from rural areas come into San Cristóbal to buy and sell mostly items for everyday use such as fruit, vegetables, animals, cloth, consumer goods and tools. San Juan Chamula is considered to be a center of indigenous culture, especially its elaborate festivals of Carnival and Day of Saint John. It was common for politicians, especially during Institutional Revolutionary Party's dominance to visit here during election campaigns and dress in indigenous clothing and carry a carved walking stick, a traditional sign of power. Relations between the indigenous ethnic groups is complicated. While there has been inter-ethnic political activism such as that promoted by the Diocese of Chiapas in the 1970s and the Zapatista movement in the 1990s, there has been inter-indigenous conflict as well. Much of this has been based on religion, pitting those of the traditional Catholic/indigenous beliefs who support the traditional power structure against Protestants, Evangelicals and Word of God Catholics (directly allied with the Diocese) who tend to oppose it. This is particularly significant problem among the Tzeltals and Tzotzils. Starting in the 1970s, traditional leaders in San Juan Chamula began expelling dissidents from their homes and land, amounting to about 20,000 indigenous forced to leave over a thirty-year period. It continues to be a serious social problem although authorities downplay it. Recently there has been political, social and ethnic conflict between the Tzotzil who are more urbanized and have a significant number of Protestant practitioners and the Tzeltal who are predominantly Catholic and live in smaller farming communities. Many Protestant Tzotzil have accused the Tzeltal of ethnic discrimination and intimidation due to their religious beliefs and the Tzeltal have in return accused the Tzotzil of singling them out for discrimination.",
"title": "Demographics"
},
{
"paragraph_id": 80,
"text": "Clothing, especially women's clothing, varies by indigenous group. For example, women in Ocosingo tend to wear a blouse with a round collar embroidered with flowers and a black skirt decorated with ribbons and tied with a cloth belt. The Lacandon people tend to wear a simple white tunic. They also make a ceremonial tunic from bark, decorated with astronomy symbols. In Tenejapa, women wear a huipil embroidered with Mayan fretwork along with a black wool rebozo. Men wear short pants, embroidered at the bottom.",
"title": "Demographics"
},
{
"paragraph_id": 81,
"text": "The Tzeltals call themselves Winik atel, which means \"working men.\" This is the largest ethnicity in the state, mostly living southeast of San Cristóbal with the largest number in Amatenango. Today, there are about 500,000 Tzeltal Indians in Chiapas. Tzeltal Mayan, part of the Mayan language family, today is spoken by about 375,000 people making it the fourth-largest language group in Mexico. There are two main dialects; highland (or Oxchuc) and lowland (or Bachajonteco). This language, along with Tzotzil, is from the Tzeltalan subdivision of the Mayan language family. Lexico-statistical studies indicate that these two languages probably became differentiated from one another around 1200 Most children are bilingual in the language and Spanish although many of their grandparents are monolingual Tzeltal speakers. Each Tzeltal community constitutes a distinct social and cultural unit with its own well-defined lands, wearing apparel, kinship system, politico-religious organization, economic resources, crafts, and other cultural features. Women are distinguished by a black skirt with a wool belt and an undyed cotton bloused embroidered with flowers. Their hair is tied with ribbons and covered with a cloth. Most men do not use traditional attire. Agriculture is the basic economic activity of the Tzeltal people. Traditional Mesoamerican crops such as maize, beans, squash, and chili peppers are the most important, but a variety of other crops, including wheat, manioc, sweet potatoes, cotton, chayote, some fruits, other vegetables, and coffee.",
"title": "Demographics"
},
{
"paragraph_id": 82,
"text": "Tzotzil speakers number just slightly less than theTzeltals at 226,000, although those of the ethnicity are probably higher. Tzotzils are found in the highlands or Los Altos and spread out towards the northeast near the border with Tabasco. However, Tzotzil communities can be found in almost every municipality of the state. They are concentrated in Chamula, Zinacantán, Chenalhó, and Simojovel. Their language is closely related to Tzeltal and distantly related to Yucatec Mayan and Lacandon. Men dress in short pants tied with a red cotton belt and a shirt that hangs down to their knees. They also wear leather huaraches and a hat decorated with ribbons. The women wear a red or blue skirt, a short huipil as a blouse, and use a chal or rebozo to carry babies and bundles. Tzotzil communities are governed by a katinab who is selected for life by the leaders of each neighborhood. The Tzotzils are also known for their continued use of the temazcal for hygiene and medicinal purposes.",
"title": "Demographics"
},
{
"paragraph_id": 83,
"text": "The Ch’ols of Chiapas migrated to the northwest of the state starting about 2,000 years ago, when they were concentrated in Guatemala and Honduras. Those Ch’ols who remained in the south are distinguished by the name Chortís. Chiapas Ch’ols are closely related to the Chontal in Tabasco as well. Choles are found in Tila, Tumbalá, Sabanilla, Palenque, and Salto de Agua, with an estimated population of about 115,000 people. The Ch’ol language belongs to the Maya family and is related to Tzeltal, Tzotzil, Lacandon, Tojolabal, and Yucatec Mayan. There are three varieties of Chol (spoken in Tila, Tumbalá, and Sabanilla), all mutually intelligible. Over half of speakers are monolingual in the Chol language. Women wear a long navy blue or black skirt with a white blouse heavily embroidered with bright colors and a sash with a red ribbon. The men only occasionally use traditional dress for events such as the feast of the Virgin of Guadalupe. This dress usually includes pants, shirts and huipils made of undyed cotton, with leather huaraches, a carrying sack and a hat. The fundamental economic activity of the Ch’ols is agriculture. They primarily cultivate corn and beans, as well as sugar cane, rice, coffee, and some fruits. They have Catholic beliefs strongly influenced by native ones. Harvests are celebrated on the Feast of Saint Rose on 30 August.",
"title": "Demographics"
},
{
"paragraph_id": 84,
"text": "The Totolabals are estimated at 35,000 in the highlands. According to oral tradition, the Tojolabales came north from Guatemala. The largest community is Ingeniero González de León in the La Cañada region, an hour outside the municipal seat of Las Margaritas. Tojolabales are also found in Comitán, Trinitaria, Altamirano and La Independencia. This area is filled with rolling hills with a temperate and moist climate. There are fast moving rivers and jungle vegetation. Tojolabal is related to Kanjobal, but also to Tzeltal and Tzotzil. However, most of the youngest of this ethnicity speak Spanish. Women dress traditionally from childhood with brightly colored skirts decorated with lace or ribbons and a blouse decorated with small ribbons, and they cover their heads with kerchiefs. They embroider many of their own clothes but do not sell them. Married women arrange their hair in two braids and single women wear it loose decorated with ribbons. Men no longer wear traditional garb daily as it is considered too expensive to make.",
"title": "Demographics"
},
{
"paragraph_id": 85,
"text": "The Zoques are found in 3,000 square kilometers the center and west of the state scattered among hundreds of communities. These were one of the first native peoples of Chiapas, with archeological ruins tied to them dating back as far as 3500 BCE. Their language is not Mayan but rather related to Mixe, which is found in Oaxaca and Veracruz. By the time the Spanish arrived, they had been reduced in number and territory. Their ancient capital was Quechula, which was covered with water by the creation of the Malpaso Dam, along with the ruins of Guelegas, which was first buried by an eruption of the Chichonal volcano. There are still Zoque ruins at Janepaguay, the Ocozocuautla and La Ciénega valleys.",
"title": "Demographics"
},
{
"paragraph_id": 86,
"text": "The Lacandons are one of the smallest native indigenous groups of the state with a population estimated between 600 and 1,000. They are mostly located in the communities of Lacanjá Chansayab, Najá, and Mensabak in the Lacandon Jungle. They live near the ruins of Bonampak and Yaxchilan and local lore states that the gods resided here when they lived on Earth. They inhabit about a million hectares of rainforest but from the 16th century to the present, migrants have taken over the area, most of which are indigenous from other areas of Chiapas. This dramatically altered their lifestyle and worldview. Traditional Lacandon shelters are huts made with fonds and wood with an earthen floor, but this has mostly given way to modern structures.",
"title": "Demographics"
},
{
"paragraph_id": 87,
"text": "The Mochós or Motozintlecos are concentrated in the municipality of Motozintla on the Guatemalan border. According to anthropologists, these people are an \"urban\" ethnicity as they are mostly found in the neighborhoods of the municipal seat. Other communities can be found near the Tacaná volcano, and in the municipalities of Tuzantán and Belisario Dominguez. The name \"Mochó\" comes from a response many gave the Spanish whom they could not understand and means \"I don't know.\" This community is in the process of disappearing as their numbers shrink.",
"title": "Demographics"
},
{
"paragraph_id": 88,
"text": "The Mams are a Mayan ethnicity that numbers about 20,000 found in thirty municipalities, especially Tapachula, Motozintla, El Porvenir, Cacahoatán and Amatenango in the southeastern Sierra Madre of Chiapas. The Mame language is one of the most ancient Mayan languages with 5,450 Mame speakers were tallied in Chiapas in the 2000 census. These people first migrated to the border region between Chiapas and Guatemala at the end of the nineteenth century, establishing scattered settlements. In the 1960s, several hundred migrated to the Lacandon rain forest near the confluence of the Santo Domingo and Jataté Rivers. Those who live in Chiapas are referred to locally as the \"Mexican Mam (or Mame)\" to differentiate them from those in Guatemala. Most live around the Tacaná volcano, which the Mams call \"our mother\" as it is considered to be the source of the fertility of the area's fields. The masculine deity is the Tajumulco volcano, which is in Guatemala.",
"title": "Demographics"
},
{
"paragraph_id": 89,
"text": "In the last decades of the 20th century, Chiapas received a large number of indigenous refugees, especially from Guatemala, many of whom remain in the state. These have added ethnicities such as the Kekchi, Chuj, Ixil, Kanjobal, K'iche' and Cakchikel to the population. The Kanjobal mainly live along the border between Chiapas and Guatemala, with almost 5,800 speakers of the language tallied in the 2000 census. It is believed that a significant number of these Kanjobal-speakers may have been born in Guatemala and immigrated to Chiapas, maintaining strong cultural ties to the neighboring nation.",
"title": "Demographics"
},
{
"paragraph_id": 90,
"text": "Chiapas accounts for 1.73% of Mexico's GDP. The primary sector, agriculture, produces 15.2% of the state's GDP. The secondary sector, mostly energy production, but also commerce, services and tourism, accounts for 21.8%. The share of the GDP coming from services is rising while that of agriculture is falling. The state is divided into nine economic regions. These regions were established in the 1980s in order to facilitate statewide economic planning. Many of these regions are based on state and federal highway systems. These include Centro, Altos, Fronteriza, Frailesca, Norte, Selva, Sierra, Soconusco and Istmo-Costa.",
"title": "Economy"
},
{
"paragraph_id": 91,
"text": "Despite being rich in resources, Chiapas, along with Oaxaca and Guerrero, lags behind the rest of the country in almost all socioeconomic indicators. As of 2005, there were 889,420 residential units; 71% had running water, 77.3% sewerage, and 93.6% electricity. Construction of these units varies from modern construction of block and concrete to those constructed of wood and laminate.",
"title": "Economy"
},
{
"paragraph_id": 92,
"text": "Because of its high rate of economic marginalization, more people migrate from Chiapas than migrate to it. Most of its socioeconomic indicators are the lowest in the country including income, education, health and housing. It has a significantly higher percentage of illiteracy than the rest of the country, although that situation has improved since the 1970s when over 45% were illiterate and 1980s, about 32%. The tropical climate presents health challenges, with most illnesses related to the gastro-intestinal tract and parasites. As of 2005, the state has 1,138 medical facilities: 1098 outpatient and 40 inpatient. Most are run by IMSS and ISSSTE and other government agencies. The implementation of NAFTA had negative effects on the economy, particularly by lowering prices for agricultural products. It made the southern states of Mexico poorer in comparison to those in the north, with over 90% of the poorest municipalities in the south of the country. As of 2006, 31.8% work in communal services, social services and personal services. 18.4% work in financial services, insurance and real estate, 10.7% work in commerce, restaurants and hotels, 9.8% work in construction, 8.9% in utilities, 7.8% in transportation, 3.4% in industry (excluding handcrafts), and 8.4% in agriculture.",
"title": "Economy"
},
{
"paragraph_id": 93,
"text": "Although until the 1960s, many indigenous communities were considered by scholars to be autonomous and economically isolated, this was never the case. Economic conditions began forcing many to migrate to work, especially in agriculture for non-indigenous. However, unlike many other migrant workers, most indigenous in Chiapas have remained strongly tied to their home communities. A study as early as the 1970s showed that 77 percent of heads of household migrated outside of the Chamula municipality as local land did not produce sufficiently to support families. In the 1970s, cuts in the price of corn forced many large landowners to convert their fields into pasture for cattle, displacing many hired laborers, cattle required less work. These agricultural laborers began to work for the government on infrastructure projects financed by oil revenue. It is estimated that in the 1980s to 1990s as many as 100,000 indigenous people moved from the mountain areas into cities in Chiapas, with some moving out of the state to Mexico City, Cancún and Villahermosa in search of employment.",
"title": "Economy"
},
{
"paragraph_id": 94,
"text": "Agriculture, livestock, forestry and fishing employ over 53% of the state's population; however, its productivity is considered to be low. Agriculture includes both seasonal and perennial plants. Major crops include corn, beans, sorghum, soybeans, peanuts, sesame seeds, coffee, cacao, sugar cane, mangos, bananas, and palm oil. These crops take up 95% of the cultivated land in the state and 90% of the agricultural production. Only four percent of fields are irrigated with the rest dependent on rainfall either seasonally or year round. Chiapas ranks second among the Mexican states in the production of cacao, the product used to make chocolate, and is responsible for about 60 percent of Mexico's total coffee output. The production of bananas, cacao and corn make Chiapas Mexico's second largest agricultural producer overall.",
"title": "Economy"
},
{
"paragraph_id": 95,
"text": "Coffee is the state's most important cash crop with a history from the 19th century. The crop was introduced in 1846 by Jeronimo Manchinelli who brought 1,500 seedlings from Guatemala on his farm La Chacara. This was followed by a number of other farms as well. Coffee production intensified during the regime of Porfirio Díaz and the Europeans who came to own many of the large farms in the area. By 1892, there were 22 coffee farms in the region, among them Nueva Alemania, Hamburgo, Chiripa, Irlanda, Argovia, San Francisco, and Linda Vista in the Soconusco region. Since then coffee production has grown and diversified to include large plantations, the use and free and forced labor and a significant sector of small producers. While most coffee is grown in the Soconusco, other areas grow it, including the municipalities of Oxchuc, Pantheló, El Bosque, Tenejapa, Chenalhó, Larráinzar, and Chalchihuitán, with around six thousand producers. It also includes organic coffee producers with 18 million tons grown annually 60,000 producers. One third of these producers are indigenous women and other peasant farmers who grow the coffee under the shade of native trees without the use of agro chemicals. Some of this coffee is even grown in environmentally protected areas such as the El Triunfo reserve, where ejidos with 14,000 people grow the coffee and sell it to cooperativers who sell it to companies such as Starbucks, but the main market is Europe. Some growers have created cooperatives of their own to cut out the middleman.",
"title": "Economy"
},
{
"paragraph_id": 96,
"text": "Ranching occupies about three million hectares of natural and induced pasture, with about 52% of all pasture induced. Most livestock is done by families using traditional methods. Most important are meat and dairy cattle, followed by pigs and domestic fowl. These three account for 93% of the value of production. Annual milk production in Chiapas totals about 180 million liters per year. The state's cattle production, along with timber from the Lacandon Jungle and energy output gives it a certain amount of economic clouts compared to other states in the region.",
"title": "Economy"
},
{
"paragraph_id": 97,
"text": "Forestry is mostly based on conifers and common tropical species producing 186,858 m per year at a value of 54,511,000 pesos. Exploited non-wood species include the Camedor palm tree for its fronds. The fishing industry is underdeveloped but includes the capture of wild species as well as fish farming. Fish production is generated both from the ocean as well as the many freshwater rivers and lakes. In 2002, 28,582 tons of fish valued at 441.2 million pesos was produced. Species include tuna, shark, shrimp, mojarra and crab.",
"title": "Economy"
},
{
"paragraph_id": 98,
"text": "The state's abundant rivers and streams have been dammed to provide about fifty-five percent of the country's hydroelectric energy. Much of this is sent to other states accounting for over six percent of all of Mexico's energy output. Main power stations are located at Malpaso, La Angostura, Chicoasén and Peñitas, which produce about eight percent of Mexico's hydroelectric energy. Manuel Moreno Torres plant on the Grijalva River the most productive in Mexico. All of the hydroelectric plants are owned and operated by the Federal Electricity Commission (Comisión Federal de Electricidad, CFE).",
"title": "Economy"
},
{
"paragraph_id": 99,
"text": "Chiapas is rich in petroleum reserves. Oil production began during the 1980s and Chiapas has become the fourth largest producer of crude oil and natural gas among the Mexican states. Many reserves are yet untapped, but between 1984 and 1992, PEMEX drilled nineteen oil wells in the Lacandona Jungle. Currently, petroleum reserves are found in the municipalities of Juárez, Ostuacán, Pichucalco and Reforma in the north of the state with 116 wells accounting for about 6.5% of the country's oil production. It also provides about a quarter of the country's natural gas. This production equals 6,313.6 m (222,960 cu ft) of natural gas and 17,565,000 barrels of oil per year.",
"title": "Economy"
},
{
"paragraph_id": 100,
"text": "Industry is limited to small and micro enterprises and include auto parts, bottling, fruit packing, coffee and chocolate processing, production of lime, bricks and other construction materials, sugar mills, furniture making, textiles, printing and the production of handcrafts. The two largest enterprises is the Comisión Federal de Electricidad and a Petróleos Mexicanos refinery. Chiapas opened its first assembly plant in 2002, a fact that highlights the historical lack of industry in this area.",
"title": "Economy"
},
{
"paragraph_id": 101,
"text": "Chiapas is one of the states that produces a wide variety of handcrafts and folk art in Mexico. One reason for this is its many indigenous ethnicities who produce traditional items out of identity as well as commercial reasons. One commercial reason is the market for crafts provided by the tourism industry. Another is that most indigenous communities can no longer provide for their own needs through agriculture. The need to generate outside income has led to many indigenous women producing crafts communally, which has not only had economic benefits but also involved them in the political process as well. Unlike many other states, Chiapas has a wide variety of wood resources such as cedar and mahogany as well as plant species such as reeds, ixtle and palm. It also has minerals such as obsidian, amber, jade and several types of clay and animals for the production of leather, dyes from various insects used to create the colors associated with the region. Items include various types of handcrafted clothing, dishes, jars, furniture, roof tiles, toys, musical instruments, tools and more.",
"title": "Economy"
},
{
"paragraph_id": 102,
"text": "Chiapas's most important handcraft is textiles, most of which is cloth woven on a backstrap loom. Indigenous girls often learn how to sew and embroider before they learn how to speak Spanish. They are also taught how to make natural dyes from insects, and weaving techniques. Many of the items produced are still for day-to-day use, often dyed in bright colors with intricate embroidery. They include skirts, belts, rebozos, blouses, huipils and shoulder wraps called chals. Designs are in red, yellow, turquoise blue, purple, pink, green and various pastels and decorated with designs such as flowers, butterflies, and birds, all based on local flora and fauna. Commercially, indigenous textiles are most often found in San Cristóbal de las Casas, San Juan Chamula and Zinacantán. The best textiles are considered to be from Magdalenas, Larráinzar, Venustiano Carranza and Sibaca.",
"title": "Economy"
},
{
"paragraph_id": 103,
"text": "One of the main minerals of the state is amber, much of which is 25 million years old, with quality comparable to that found in the Dominican Republic. Chiapan amber has a number of unique qualities, including much that is clear all the way through and some with fossilized insects and plants. Most Chiapan amber is worked into jewelry including pendants, rings and necklaces. Colors vary from white to yellow/orange to a deep red, but there are also green and pink tones as well. Since pre-Hispanic times, native peoples have believed amber to have healing and protective qualities. The largest amber mine is in Simojovel, a small village 130 km from Tuxtla Gutiérrez, which produces 95% of Chiapas's amber. Other mines are found in Huitiupán, Totolapa, El Bosque, Pueblo Nuevo Solistahuacán, Pantelhó and San Andrés Duraznal. According to the Museum of Amber in San Cristóbal, almost 300 kg of amber is extracted per month from the state. Prices vary depending on quality and color.",
"title": "Economy"
},
{
"paragraph_id": 104,
"text": "The major center for ceramics in the state is the city of Amatenango del Valle, with its barro blanco (white clay) pottery. The most traditional ceramic in Amatenango and Aguacatenango is a type of large jar called a cantaro used to transport water and other liquids. Many pieces created from this clay are ornamental as well as traditional pieces for everyday use such as comals, dishes, storage containers and flowerpots. All pieces here are made by hand using techniques that go back centuries. Other communities that produce ceramics include Chiapa de Corzo, Tonalá, Ocuilpa, Suchiapa and San Cristóbal de las Casas.",
"title": "Economy"
},
{
"paragraph_id": 105,
"text": "Wood crafts in the state center on furniture, brightly painted sculptures and toys. The Tzotzils of San Juan de Chamula are known for their sculptures as well as for their sturdy furniture. Sculptures are made from woods such as cedar, mahogany and strawberry tree. Another town noted for their sculptures is Tecpatán. The making lacquer to use in the decoration of wooden and other items goes back to the colonial period. The best-known area for this type of work, called \"laca\" is Chiapa de Corzo, which has a museum dedicated to it. One reason this type of decoration became popular in the state was that it protected items from the constant humidity of the climate. Much of the laca in Chiapa de Corzo is made in the traditional way with natural pigments and sands to cover gourds, dipping spoons, chests, niches and furniture. It is also used to create the Parachicos masks.",
"title": "Economy"
},
{
"paragraph_id": 106,
"text": "Traditional Mexican toys, which have all but disappeared in the rest of Mexico, are still readily found here and include the cajita de la serpiente, yo yos, ball in cup and more. Other wooden items include masks, cooking utensils, and tools. One famous toy is the \"muñecos zapatistas\" (Zapatista dolls), which are based on the revolutionary group that emerged in the 1990s.",
"title": "Economy"
},
{
"paragraph_id": 107,
"text": "Ninety-four percent of the state's commercial outlets are small retail stores with about 6% wholesalers. There are 111 municipal markets, 55 tianguis, three wholesale food markets and 173 large vendors of staple products. The service sector is the most important to the economy, with mostly commerce, warehousing and tourism.",
"title": "Economy"
},
{
"paragraph_id": 108,
"text": "Tourism brings large numbers of visitors to the state each year. Most of Chiapas's tourism is based on its culture, colonial cities and ecology. The state has a total of 491 ranked hotels with 12,122 rooms. There are also 780 other establishments catering primarily to tourism, such as services and restaurants.",
"title": "Economy"
},
{
"paragraph_id": 109,
"text": "There are three main tourist routes: the Maya Route, the Colonial Route and the Coffee Route. The Maya Route runs along the border with Guatemala in the Lacandon Jungle and includes the sites of Palenque, Bonampak, Yaxchilan along with the natural attractions of Agua Azul Waterfalls, Misol-Há Waterfall, and the Catazajá Lake. Palenque is the most important of these sites, and one of the most important tourist destinations in the state. Yaxchilan was a Mayan city along the Usumacinta River. It developed between 350 and 810 CE. Bonampak is known for its well preserved murals. These Mayan sites have made the state an attraction for international tourism. These sites contain a large number of structures, most of which date back thousands of years, especially to the sixth century. In addition to the sites on the Mayan Route, there are others within the state away from the border such as Toniná, near the city of Ocosingo.",
"title": "Economy"
},
{
"paragraph_id": 110,
"text": "The Colonial Route is mostly in the central highlands with a significant number of churches, monasteries and other structures from the colonial period along with some from the 19th century and even into the early 20th. The most important city on this route is San Cristóbal de las Casas, located in the Los Altos region in the Jovel Valley. The historic center of the city is filled with tiled roofs, patios with flowers, balconies, Baroque facades along with Neoclassical and Moorish designs. It is centered on a main plaza surrounded by the cathedral, the municipal palace, the Portales commercial area and the San Nicolás church. In addition, it has museums dedicated to the state's indigenous cultures, one to amber and one to jade, both of which have been mined in the state. Other attractions along this route include Comitán de Domínguez and Chiapa de Corzo, along with small indigenous communities such as San Juan Chamula. The state capital of Tuxtla Gutiérrez does not have many colonial era structures left, but it lies near the area's most famous natural attraction of the Sumidero Canyon. This canyon is popular with tourists who take boat tours into it on the Grijalva River to see such features such as caves (La Cueva del Hombre, La Cueva del Silencio) and the Christmas Tree, which is a rock and plant formation on the side of one of the canyon walls created by a seasonal waterfall.",
"title": "Economy"
},
{
"paragraph_id": 111,
"text": "The Coffee Route begins in Tapachula and follows a mountainous road into the Suconusco regopm. The route passes through Puerto Chiapas, a port with modern infrastructure for shipping exports and receiving international cruises. The route visits a number of coffee plantations, such as Hamburgo, Chiripa, Violetas, Santa Rita, Lindavista, Perú-París, San Antonio Chicarras and Rancho Alegre. These haciendas provide visitors with the opportunity to see how coffee is grown and initially processed on these farms. They also offer a number of ecotourism activities such as mountain climbing, rafting, rappelling and mountain biking. There are also tours into the jungle vegetation and the Tacaná Volcano. In addition to coffee, the region also produces most of Chiapas's soybeans, bananas and cacao.",
"title": "Economy"
},
{
"paragraph_id": 112,
"text": "The state has a large number of ecological attractions most of which are connected to water. The main beaches on the coastline include Puerto Arista, Boca del Cielo, Playa Linda, Playa Aventuras, Playa Azul and Santa Brigida. Others are based on the state's lakes and rivers. Laguna Verde is a lake in the Coapilla municipality. The lake is generally green but its tones constantly change through the day depending on how the sun strikes it. In the early morning and evening hours there can also be blue and ochre tones as well. The El Chiflón Waterfall is part of an ecotourism center located in a valley with reeds, sugarcane, mountains and rainforest. It is formed by the San Vicente River and has pools of water at the bottom popular for swimming. The Las Nubes Ecotourism center is located in the Las Margaritas municipality near the Guatemalan border. The area features a number of turquoise blue waterfalls with bridges and lookout points set up to see them up close.",
"title": "Economy"
},
{
"paragraph_id": 113,
"text": "Still others are based on conservation, local culture and other features. The Las Guacamayas Ecotourism Center is located in the Lacandon Jungle on the edge of the Montes Azules reserve. It is centered on the conservation of the red macaw, which is in danger of extinction. The Tziscao Ecotourism Center is centered on a lake with various tones. It is located inside the Lagunas de Montebello National Park, with kayaking, mountain biking and archery. Lacanjá Chansayab is located in the interior of the Lacandon Jungle and a major Lacandon people community. It has some activities associated with ecotourism such as mountain biking, hiking and cabins. The Grutas de Rancho Nuevo Ecotourism Center is centered on a set of caves in which appear capricious forms of stalagmite and stalactites. There is horseback riding as well.",
"title": "Economy"
},
{
"paragraph_id": 114,
"text": "Architecture in the state begins with the archeological sites of the Mayans and other groups who established color schemes and other details that echo in later structures. After the Spanish subdued the area, the building of Spanish style cities began, especially in the highland areas.",
"title": "Culture"
},
{
"paragraph_id": 115,
"text": "Many of the colonial-era buildings are related to Dominicans who came from Seville. This Spanish city had much Arabic influence in its architecture, and this was incorporated into the colonial architecture of Chiapas, especially in structures dating from the 16th to 18th centuries. However, there are a number of architectural styles and influences present in Chiapas colonial structures, including colors and patterns from Oaxaca and Central America along with indigenous ones from Chiapas.",
"title": "Culture"
},
{
"paragraph_id": 116,
"text": "The main colonial structures are the cathedral and Santo Domingo church of San Cristóbal, the Santo Domingo monastery and La Pila in Chiapa de Corzo. The San Cristóbal cathedral has a Baroque facade that was begun in the 16th century but by the time it was finished in the 17th, it had a mix of Spanish, Arabic, and indigenous influences. It is one of the most elaborately decorated in Mexico.",
"title": "Culture"
},
{
"paragraph_id": 117,
"text": "The churches and former monasteries of Santo Domingo, La Merced and San Francisco have ornamentation similar to that of the cathedral. The main structures in Chiapa de Corzo are the Santo Domingo monastery and the La Pila fountain. Santo Domingo has indigenous decorative details such as double headed eagles as well as a statue of the founding monk. In San Cristóbal, the Diego de Mazariegos house has a Plateresque facade, while that of Francisco de Montejo, built later in the 18th century has a mix of Baroque and Neoclassical. Art Deco structures can be found in San Cristóbal and Tapachula in public buildings as well as a number of rural coffee plantations from the Porfirio Díaz era.",
"title": "Culture"
},
{
"paragraph_id": 118,
"text": "Art in Chiapas is based on the use of color and has strong indigenous influence. This dates back to cave paintings such as those found in Sima de las Cotorras near Tuxtla Gutiérrez and the caverns of Rancho Nuevo where human remains and offerings were also found. The best-known pre-Hispanic artwork is the Maya murals of Bonampak, which are the only Mesoamerican murals to have been preserved for over 1500 years. In general, Mayan artwork stands out for its precise depiction of faces and its narrative form. Indigenous forms derive from this background and continue into the colonial period with the use of indigenous color schemes in churches and modern structures such as the municipal palace in Tapachula. Since the colonial period, the state has produced a large number of painters and sculptors. Noted 20th-century artists include Lázaro Gómez, Ramiro Jiménez Chacón, Héctor Ventura Cruz, Máximo Prado Pozo, and Gabriel Gallegos Ramos.",
"title": "Culture"
},
{
"paragraph_id": 119,
"text": "The two best-known poets from the state are Jaime Sabines and Rosario Castellanos, both from prominent Chiapan families. The first was a merchant and diplomat and the second was a teacher, diplomat, theatre director and the director of the Instituto Nacional Indigenista. Jaime Sabines is widely regarded as Mexico's most influential contemporary poet. His work celebrates everyday people in common settings.",
"title": "Culture"
},
{
"paragraph_id": 120,
"text": "The most important instrument in the state is the marimba. In the pre-Hispanic period, indigenous peoples had already been producing music with wooden instruments. The marimba was introduced by African slaves brought to Chiapas by the Spanish. However, it achieved its widespread popularity in the early 20th century due to the formation of the Cuarteto Marimbistico de los Hermanos Gómez in 1918, who popularized the instrument and the popular music that it plays not only in Chiapas but in various parts of Mexico and into the United States. Along with Cuban Juan Arozamena, they composed the piece \"Las chiapanecas\" considered to be the unofficial anthem of the state. In the 1940s, they were also featured in a number of Mexican films. Marimbas are constructed in Venustiano Carranza, Chiapas de Corzo and Tuxtla Gutiérrez.",
"title": "Culture"
},
{
"paragraph_id": 121,
"text": "Like the rest of Mesoamerica, the basic diet has been based on corn and Chiapas cooking retains strong indigenous influence. One important ingredient is chipilin, a fragrant and strongly flavored herb that is used on most of the indigenous plates and hoja santa, the large anise-scented leaves used in much of southern Mexican cuisine. Chiapan dishes do not incorporate many chili peppers as part of their dishes. Rather, chili peppers are most often found in the condiments. One reason for that is that a local chili pepper, called the simojovel, is far too hot to use except very sparingly. Chiapan cuisine tends to rely more on slightly sweet seasonings in their main dishes such as cinnamon, plantains, prunes and pineapple are often found in meat and poultry dishes.",
"title": "Culture"
},
{
"paragraph_id": 122,
"text": "Tamales are a major part of the diet and often include chipilín mixed into the dough and hoja santa, within the tamale itself or used to wrap it. One tamale native to the state is the \"picte\", a fresh sweet corn tamale. Tamales juacanes are filled with a mixture of black beans, dried shrimp, and pumpkin seeds.",
"title": "Culture"
},
{
"paragraph_id": 123,
"text": "Meats are centered on the European introduced beef, pork and chicken as many native game animals are in danger of extinction. Meat dishes are frequently accompanied by vegetables such as squash, chayote and carrots. Black beans are the favored type. Beef is favored, especially a thin cut called tasajo usually served in a sauce. Pepita con tasajo is a common dish at festivals especially in Chiapa de Corzo. It consists of a squash seed based sauced over reconstituted and shredded dried beef. As a cattle raising area, beef dishes in Palenque are particularly good. Pux-Xaxé is a stew with beef organ meats and mole sauce made with tomato, chili bolita and corn flour. Tzispolá is a beef broth with chunks of meat, chickpeas, cabbage and various types of chili peppers. Pork dishes include cochito, which is pork in an adobo sauce. In Chiapa de Corzo, their version is cochito horneado, which is a roast suckling pig flavored with adobo. Seafood is a strong component in many dishes along the coast. Turula is dried shrimp with tomatoes. Sausages, ham and other cold cuts are most often made and consumed in the highlands.",
"title": "Culture"
},
{
"paragraph_id": 124,
"text": "In addition to meat dishes, there is chirmol, a cooked tomato sauced flavored with chili pepper, onion and cilantro and zats, butterfly caterpillars from the Altos de Chiapas that are boiled in salted water, then sautéed in lard and eaten with tortillas, limes, and green chili pepper.",
"title": "Culture"
},
{
"paragraph_id": 125,
"text": "Sopa de pan consists of layers of bread and vegetables covered with a broth seasoned with saffron and other flavorings. A Comitán speciality is hearts of palm salad in vinaigrette and Palenque is known for many versions of fried plaintains, including filled with black beans or cheese.",
"title": "Culture"
},
{
"paragraph_id": 126,
"text": "Cheese making is important, especially in the municipalities of Ocosingo, Rayon and Pijijiapan. Ocosingo has its own self-named variety, which is shipped to restaurants and gourmet shops in various parts of the country. Regional sweets include crystallized fruit, coconut candies, flan and compotes. San Cristobal is noted for its sweets, as well as chocolates, coffee and baked goods.",
"title": "Culture"
},
{
"paragraph_id": 127,
"text": "While Chiapas is known for good coffee, there are a number of other local beverages. The oldest is pozol, originally the name for a fermented corn dough. This dough has its origins in the pre-Hispanic period. To make the beverage, the dough is dissolved in water and usually flavored with cocoa and sugar, but sometimes it is left to ferment further. It is then served very cold with much ice. Taxcalate is a drink made from a powder of toasted corn, achiote, cinnamon and sugar prepared with milk or water. Pumbo is a beverage made with pineapple, club soda, vodka, sugar syrup and much ice. Pox is a drink distilled from sugar cane.",
"title": "Culture"
},
{
"paragraph_id": 128,
"text": "Like in the rest of Mexico, Christianity was introduced to the native populations of Chiapas by the Spanish conquistadors. However, Catholic beliefs were mixed with indigenous ones to form what is now called \"traditionalist\" Catholic belief. The Diocese of Chiapas comprises almost the entire state, and centered on San Cristobal de las Casas. It was founded in 1538 by Pope Paul III to evangelize the area with its most famous bishop of that time Bartolomé de las Casas. Evangelization focused on grouping indigenous peoples into communities centered on a church. This bishop not only graciously evangelized the people in their own language, he worked to introduce many of the crafts still practiced today. While still a majority, only 53.9% percent of Chiapas residents profess the Catholic faith as of 2020, compared to 78.6% of the total national population.",
"title": "Religion"
},
{
"paragraph_id": 129,
"text": "Some indigenous people mix Christianity with Indian beliefs. One particular area where this is strong is the central highlands in small communities such as San Juan Chamula. In one church in San Cristobal, Mayan rites including the sacrifice of animals is permitted inside the church to ask for good health or to \"ward off the evil eye.\"",
"title": "Religion"
},
{
"paragraph_id": 130,
"text": "Starting in the 1970s, there has been a shift away from traditional Catholic affiliation to Protestant, Evangelical and other Christian denominations. Presbyterians and Pentecostals attracted a large number of converts, with percentages of Protestants in the state rising from five percent in 1970 to twenty-one percent in 2000. This shift has had a political component as well, with those making the switch tending to identify across ethnic boundaries, especially across indigenous ethnic boundaries and being against the traditional power structure. The National Presbyterian Church in Mexico is particularly strong in Chiapas, the state can be described as one of the strongholds of the denomination.",
"title": "Religion"
},
{
"paragraph_id": 131,
"text": "Both Protestants and Word of God Catholics tend to oppose traditional cacique leadership and often worked to prohibit the sale of alcohol. The latter had the effect of attracting many women to both movements.",
"title": "Religion"
},
{
"paragraph_id": 132,
"text": "The growing number of Protestants, Evangelicals and Word of God Catholics challenging traditional authority has caused religious strife in a number of indigenous communities. Tensions have been strong, at times, especially in rural areas such as San Juan Chamula. Tension among the groups reached its peak in the 1990s with a large number of people injured during open clashes. In the 1970s, caciques began to expel dissidents from their communities for challenging their power, initially with the use of violence. By 2000, more than 20,000 people had been displaced, but state and federal authorities did not act to stop the expulsions. Today, the situation has quieted but the tension remains, especially in very isolated communities.",
"title": "Religion"
},
{
"paragraph_id": 133,
"text": "The Spanish Murabitun community, the Comunidad Islámica en España, based in Granada in Spain, and one of its missionaries, Muhammad Nafia (formerly Aureliano Pérez), now emir of the Comunidad Islámica en México, arrived in the state of Chiapas shortly after the Zapatista uprising and established a commune in the city of San Cristóbal. The group, characterized as anti-capitalistic, entered an ideological pact with the socialist Zapatistas group. President Vicente Fox voiced concerns about the influence of the fundamentalism and possible connections to the Zapatistas and the Basque terrorist organization Euskadi Ta Askatasuna (ETA), but it appeared that converts had no interest in political extremism. By 2015, many indigenous Mayans and more than 700 Tzotzils have converted to Islam. In San Cristóbal, the Murabitun established a pizzeria, a carpentry workshop and a Quranic school (madrasa) where children learned Arabic and prayed five times a day in the backroom of a residential building, and women in head scarves have become a common sight. Nowadays, most of the Mayan Muslims have left the Murabitun and established ties with the CCIM, now following the orthodox Sunni school of Islam. They built the Al-Kausar Mosque in San Cristobal de las Casas.",
"title": "Religion"
},
{
"paragraph_id": 134,
"text": "The earliest population of Chiapas was in the coastal Soconusco region, where the Chantuto peoples appeared, going back to 5500 BC. This was the oldest Mesoamerican culture discovered to date.",
"title": "Archaeology"
},
{
"paragraph_id": 135,
"text": "The largest and best-known archaeological sites in Chiapas belong to the Mayan civilization. Apart from a few works by Franciscan friars, knowledge of Maya civilisation largely disappeared after the Spanish Conquest. In the mid-19th century, John Lloyd Stephens and Frederick Catherwood traveled though the sites in Chiapas and other Mayan areas and published their writings and illustrations. This led to serious work on the culture including the deciphering of its hieroglyphic writing.",
"title": "Archaeology"
},
{
"paragraph_id": 136,
"text": "In Chiapas, principal Mayan sites include Palenque, Toniná, Bonampak, Chinkoltic and Tenam Puentes, all or near in the Lacandon Jungle. They are technically more advanced than earlier Olmec sites, which can best be seen in the detailed sculpting and novel construction techniques, including structures of four stories in height. Mayan sites are not only noted for large numbers of structures, but also for glyphs, other inscriptions, and artwork that has provided a relatively complete history of many of the sites.",
"title": "Archaeology"
},
{
"paragraph_id": 137,
"text": "Palenque is the most important Mayan and archaeological site. Though much smaller than the huge sites at Tikal or Copán, Palenque contains some of the finest architecture, sculpture and stucco reliefs the Mayans ever produced. The history of the Palenque site begins in 431 with its height under Pakal I (615–683), Chan-Bahlum II (684–702) and Kan-Xul who reigned between 702 and 721. However, the power of Palenque would be lost by the end of the century. Pakal's tomb was not discovered inside the Temple of Inscriptions until 1949. Today, Palenque is a World Heritage Site and one of the best-known sites in Mexico. The similarly-aged site (750/700–600) of Pampa el Pajón preserves burials and cultural items, including cranial modifications.",
"title": "Archaeology"
},
{
"paragraph_id": 138,
"text": "Yaxchilan flourished in the 8th and 9th centuries. The site contains impressive ruins, with palaces and temples bordering a large plaza upon a terrace above the Usumacinta River. The architectural remains extend across the higher terraces and the hills to the south of the river, overlooking both the river itself and the lowlands beyond. Yaxchilan is known for the large quantity of excellent sculpture at the site, such as the monolithic carved stelae and the narrative stone reliefs carved on lintels spanning the temple doorways. Over 120 inscriptions have been identified on the various monuments from the site. The major groups are the Central Acropolis, the West Acropolis and the South Acropolis. The South Acropolis occupies the highest part of the site. The site is aligned with relation to the Usumacinta River, at times causing unconventional orientation of the major structures, such as the two ballcourts.",
"title": "Archaeology"
},
{
"paragraph_id": 139,
"text": "The city of Bonampak features some of the finest remaining Maya murals. The realistically rendered paintings depict human sacrifices, musicians and scenes of the royal court. In fact the name means “painted murals.” It is centered on a large plaza and has a stairway that leads to the Acropolis. There are also a number of notable steles.",
"title": "Archaeology"
},
{
"paragraph_id": 140,
"text": "Toniná is near the city of Ocosingo with its main features being the Casa de Piedra (House of Stone) and Acropolis. The latter is a series of seven platforms with various temples and steles. This site was a ceremonial center that flourished between 600 and 900 CE.",
"title": "Archaeology"
},
{
"paragraph_id": 141,
"text": "The capital of Sak Tz’i’ (an Ancient Maya kingdom) now named Lacanja Tzeltal, was revealed by researchers led by associate anthropology professor Charles Golden and bioarchaeologist Andrew Scherer in the Chiapas in the backyard of a Mexican farmer in 2020.",
"title": "Archaeology"
},
{
"paragraph_id": 142,
"text": "Multiple domestic constructions used by the population for religious purposes. “Plaza Muk’ul Ton” or Monuments Plaza where people used to gather for ceremonies was also unearthed by the team.",
"title": "Archaeology"
},
{
"paragraph_id": 143,
"text": "While the Mayan sites are the best-known, there are a number of other important sites in the state, including many older than the Maya civilization.",
"title": "Archaeology"
},
{
"paragraph_id": 144,
"text": "The oldest sites are in the coastal Soconusco region. This includes the Mokaya culture, the oldest ceramic culture of Mesoamerica. Later, Paso de la Amada became important. Many of these sites are in Mazatan, Chiapas area.",
"title": "Archaeology"
},
{
"paragraph_id": 145,
"text": "Izapa became an important pre-Mayan site as well.",
"title": "Archaeology"
},
{
"paragraph_id": 146,
"text": "There are also other ancient sites including Tapachula and Tecpatán, and Pijijiapan. These sites contain numerous embankments and foundations that once lay beneath pyramids and other buildings. Some of these buildings have disappeared and others have been covered by jungle for about 3,000 years, unexplored.",
"title": "Archaeology"
},
{
"paragraph_id": 147,
"text": "Pijijiapan and Izapa are on the Pacific coast and were the most important pre Hispanic cities for about 1,000 years, as the most important commercial centers between the Mexican Plateau and Central America. Sima de las Cotorras is a sinkhole 140 meters deep with a diameter of 160 meters in the municipality of Ocozocoautla. It contains ancient cave paintings depicting warriors, animals and more. It is best known as a breeding area for parrots, thousands of which leave the area at once at dawn and return at dusk. The state as its Museo Regional de Antropologia e Historia located in Tuxtla Gutiérrez focusing on the pre Hispanic peoples of the state with a room dedicated to its history from the colonial period.",
"title": "Archaeology"
},
{
"paragraph_id": 148,
"text": "The average number of years of schooling is 6.7, which is the beginning of middle school, compared to the Mexico average of 8.6. 16.5% have no schooling at all, 59.6% have only primary school/secondary school, 13.7% finish high school or technical school and 9.8% go to university. Eighteen out of every 100 people 15 years or older cannot read or write, compared to 7/100 nationally. Most of Chiapas's illiterate population are indigenous women, who are often prevented from going to school. School absenteeism and dropout rates are highest among indigenous girls.",
"title": "Education"
},
{
"paragraph_id": 149,
"text": "There are an estimated 1.4 million students in the state from preschool on up. The state has about 61,000 teachers and just over 17,000 centers of educations. Preschool and primary schools are divided into modalities called general, indigenous, private and community educations sponsored by CONAFE. Middle school is divided into technical, telesecundaria (distance education) and classes for working adults. About 98% of the student population of the state is in state schools. Higher levels of education include \"professional medio\" (vocational training), general high school and technology-focused high school. At this level, 89% of students are in public schools. There are 105 universities and similar institutions with 58 public and 47 private serving over 60,500 students.",
"title": "Education"
},
{
"paragraph_id": 150,
"text": "The state university is the Universidad Autónoma de Chiapas [es] (UNACH). It was begun when an organization to establish a state level institution was formed in 1965, with the university itself opening its doors ten years later in 1975. The university project was partially supported by UNESCO in Mexico. It integrated older schools such as the Escuela de Derecho (Law School), which originated in 1679; the Escuela de Ingeniería Civil (School of Civil Engineering), founded in 1966; and the Escuela de Comercio y Administración, which was located in Tuxtla Gutiérrez.",
"title": "Education"
},
{
"paragraph_id": 151,
"text": "The state has approximately 22,517 km (13,991 mi) of highway with 10,857 federally maintained and 11,660 maintained by the state. Almost all of these kilometers are paved. Major highways include the Las Choapas-Raudales-Ocozocoautla, which links the state to Oaxaca, Veracruz, Puebla and Mexico City. Major airports include Llano San Juan in Ocozocoautla, Francisco Sarabia National Airport (which was replaced by Ángel Albino Corzo International Airport) in Tuxtla Gutiérrez and Corazón de María Airport (which closed in 2010) in San Cristóbal de las Casas. These are used for domestic flights with the airports in Palenque and Tapachula providing international service into Guatemala. There are 22 other airfields in twelve other municipalities. Rail lines extend over 547.8 km. There are two major lines: one in the north of the state that links the center and southeast of the country, and the Costa Panamericana route, which runs from Oaxaca to the Guatemalan border.",
"title": "Infrastructure"
},
{
"paragraph_id": 152,
"text": "Chiapas's main port is just outside the city of Tapachula called the Puerto Chiapas. It faces 3,361 m (11,027 ft) of ocean, with 3,060 m (32,900 sq ft) of warehouse space. Next to it there is an industrial park that covers 2,340,000 m (25,200,000 sq ft; 234 ha; 580 acres). Puerto Chiapas has 60,000 m (650,000 sq ft) of area with a capacity to receive 1,800 containers as well as refrigerated containers. The port serves the state of Chiapas and northern Guatemala. Puerto Chiapas serves to import and export products across the Pacific to Asia, the United States, Canada and South America. It also has connections with the Panama Canal. A marina serves yachts in transit. There is an international airport located eleven km (6.8 mi) away as well as a railroad terminal ending at the port proper. Over the past five years the port has grown with its newest addition being a terminal for cruise ships with tours to the Izapa site, the Coffee Route, the city of Tapachula, Pozuelos Lake and an Artesanal Chocolate Tour. Principal exports through the port include banana and banana trees, corn, fertilizer and tuna.",
"title": "Infrastructure"
},
{
"paragraph_id": 153,
"text": "There are thirty-six AM radio stations and sixteen FM stations. There are thirty-seven local television stations and sixty-six repeaters. Newspapers of Chiapas include: Chiapas Hoy, Cuarto Poder , El Heraldo de Chiapas, El Orbe, La Voz del Sureste, and Noticias de Chiapas.",
"title": "Infrastructure"
}
] | Chiapas, officially the Free and Sovereign State of Chiapas, is one of the states that make up the 32 federal entities of Mexico. It comprises 124 municipalities as of September 2017 and its capital and largest city is Tuxtla Gutiérrez. Other important population centers in Chiapas include Ocosingo, Tapachula, San Cristóbal de las Casas, Comitán, and Arriaga. Chiapas is the southernmost state in Mexico, and it borders the states of Oaxaca to the west, Veracruz to the northwest, and Tabasco to the north, and the Petén, Quiché, Huehuetenango, and San Marcos departments of Guatemala to the east and southeast. Chiapas has a significant coastline on the Pacific Ocean to the southwest. In general, Chiapas has a humid, tropical climate. In the northern area bordering Tabasco, near Teapa, rainfall can average more than 3,000 mm (120 in) per year. In the past, natural vegetation in this region was lowland, tall perennial rainforest, but this vegetation has been almost completely cleared to allow agriculture and ranching. Rainfall decreases moving towards the Pacific Ocean, but it is still abundant enough to allow the farming of bananas and many other tropical crops near Tapachula. On the several parallel sierras or mountain ranges running along the center of Chiapas, the climate can be quite moderate and foggy, allowing the development of cloud forests like those of Reserva de la Biosfera El Triunfo, home to a handful of horned guans, resplendent quetzals, and azure-rumped tanagers. Chiapas is home to the ancient Mayan ruins of Palenque, Yaxchilán, Bonampak, Chinkultic and Toniná. It is also home to one of the largest indigenous populations in the country, with ten federally recognized ethnicities. | 2002-02-25T15:43:11Z | 2023-12-28T12:40:49Z | [
"Template:For",
"Template:IPA-es",
"Template:Cvt",
"Template:See also",
"Template:Quantify",
"Template:Doubtful",
"Template:Largest cities",
"Template:Historical populations",
"Template:By whom",
"Template:Ill",
"Template:Reflist",
"Template:Cite news",
"Template:Cite book",
"Template:In lang",
"Template:Cite encyclopedia",
"Template:Cite thesis",
"Template:Cite web",
"Template:Webarchive",
"Template:States of Mexico",
"Template:Short description",
"Template:Bar box",
"Template:Portal bar",
"Template:Cite journal",
"Template:Cite press release",
"Template:ISBN",
"Template:IPA-myn",
"Template:As of",
"Template:Further",
"Template:Qn",
"Template:Lang",
"Template:Cite act",
"Template:Lang-es",
"Template:Main",
"Template:Snd",
"Template:Commons category",
"Template:Infobox settlement",
"Template:Which",
"Template:Citation needed",
"Template:Dubious",
"Template:Cite EB1911",
"Template:Dead link",
"Template:Osmrelation",
"Template:Chiapas",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Chiapas |
6,788 | Chrysler Building | The Chrysler Building is an Art Deco skyscraper on the East Side of Manhattan in New York City, at the intersection of 42nd Street and Lexington Avenue in Midtown Manhattan. At 1,046 ft (319 m), it is the tallest brick building in the world with a steel framework, and it was the world's tallest building for 11 months after its completion in 1930. As of 2019, the Chrysler is the 12th-tallest building in the city, tied with The New York Times Building.
Originally a project of real estate developer and former New York State Senator William H. Reynolds, the building was constructed by Walter Chrysler, the head of the Chrysler Corporation. The construction of the Chrysler Building, an early skyscraper, was characterized by a competition with 40 Wall Street and the Empire State Building to become the world's tallest building. The Chrysler Building was designed and funded by Walter Chrysler personally as a real estate investment for his children, but it was not intended as the Chrysler Corporation's headquarters. An annex was completed in 1952, and the building was sold by the Chrysler family the next year, with numerous subsequent owners.
When the Chrysler Building opened, there were mixed reviews of the building's design, with some critics calling it inane and unoriginal and others hailing it as modernist and iconic. Today the building is seen as a paragon of the Art Deco architectural style. In 2007, it was ranked ninth on the American Institute of Architects' list of America's Favorite Architecture. The facade and interior became New York City designated landmarks in 1978, and the structure was added to the National Register of Historic Places as a National Historic Landmark in 1976.
The Chrysler Building is on the eastern side of Lexington Avenue between 42nd and 43rd streets in Midtown Manhattan in New York City. The land was donated to The Cooper Union for the Advancement of Science and Art in 1902. The site is roughly a trapezoid with a 201-foot-long (61 m) frontage on Lexington Avenue; a 167-foot-long (51 m) frontage on 42nd Street; and a 205-foot-long (62 m) frontage on 43rd Street. The site bordered the old Boston Post Road, which predated, and ran aslant of, the Manhattan street grid established by the Commissioners' Plan of 1811. As a result, the east side of the building's base is similarly aslant. The building is assigned its own ZIP Code, 10174; it was one of 41 buildings in Manhattan that had their own ZIP Codes as of 2019.
The Grand Hyatt New York hotel and the Graybar Building are across Lexington Avenue, while the Socony–Mobil Building is across 42nd Street. In addition, the Chanin Building is to the southwest, diagonally across Lexington Avenue and 42nd Street.
The Chrysler Building was designed by William Van Alen in the Art Deco style and is named after one of its original tenants, automotive executive Walter Chrysler. With a height of 1,046 feet (319 m), the Chrysler is the 12th-tallest building in the city as of 2019, tied with The New York Times Building. The building is constructed of a steel frame infilled with masonry, with areas of decorative metal cladding. The structure contains 3,862 exterior windows. Approximately fifty metal ornaments protrude at the building's corners on five floors, reminiscent of gargoyles on Gothic cathedrals. The 31st floor contains gargoyles as well as replicas of the 1929 Chrysler radiator caps, and the 61st floor is adorned with eagles as a nod to America's national bird.
The design of the Chrysler Building makes extensive use of bright "Nirosta" stainless steel, an austenitic alloy developed in Germany by Krupp. It was the first American project to use this "18-8 stainless steel", composed of 18% chromium and 8% nickel. Nirosta was used in the exterior ornaments, the window frames, the crown, and the needle. The steel was an integral part of Van Alen's design, as E.E. Thum explains: "The use of permanently bright metal was of greatest aid in the carrying of rising lines and the diminishing circular forms in the roof treatment, so as to accentuate the gradual upward swing until it literally dissolves into the sky...." Stainless steel producers used the Chrysler Building to evaluate the durability of the product in architecture. In 1929, the American Society for Testing Materials created an inspection committee to study its performance, which regarded the Chrysler Building as the best location to do so; a subcommittee examined the building's panels every five years until 1960, when the inspections were canceled because the panels had shown minimal deterioration.
The Chrysler Building's height and legally mandated setbacks influenced Van Alen in his design. The walls of the lowermost sixteen floors rise directly from the sidewalk property lines, except for a recess on one side that gives the building a "U"-shaped floor plan above the fourth floor. There are setbacks on floors 16, 18, 23, 28, and 31, making the building compliant with the 1916 Zoning Resolution. This gives the building the appearance of a ziggurat on one side and a U-shaped palazzo on the other. Above the 31st floor, there are no more setbacks until the 60th floor, above which the structure is funneled into a Maltese cross shape that "blends the square shaft to the finial", according to author and photographer Cervin Robinson.
The floor plans of the first sixteen floors were made as large as possible to optimize the amount of rental space nearest ground level, which was seen as most desirable. The U-shaped cut above the fourth floor served as a shaft for air flow and illumination. The area between floors 28 and 31 added "visual interest to the middle of the building, preventing it from being dominated by the heavy detail of the lower floors and the eye-catching design of the finial. They provide a base to the column of the tower, effecting a transition between the blocky lower stories and the lofty shaft."
The ground floor exterior is covered in polished black granite from Shastone, while the three floors above it are clad in white marble from Georgia. There are two main entrances, on Lexington Avenue and on 42nd Street, each three floors high with Shastone granite surrounding each proscenium-shaped entryway. At some distance into each main entryway, there are revolving doors "beneath intricately patterned metal and glass screens", designed so as to embody the Art Deco tenet of amplifying the entrance's visual impact. A smaller side entrance on 43rd Street is one story high. There are storefronts consisting of large Nirosta-steel-framed windows at ground level. Office windows penetrate the second through fourth floors.
The west and east elevations contain the air shafts above the fourth floor, while the north and south sides contain the receding setbacks. Below the 16th floor, the facade is clad with white brick, interrupted by white-marble bands in a manner similar to basket weaving. The inner faces of the brick walls are coated with a waterproof grout mixture measuring about 1⁄16 inch (1.6 mm) thick. The windows, arranged in grids, do not have window sills, the frames being flush with the facade. Between the 16th and 24th floors, the exterior exhibits vertical white brick columns that are separated by windows on each floor. This visual effect is made possible by the presence of aluminum spandrels between the columns of windows on each floor. There are abstract reliefs on the 20th through 22nd-floor spandrels, while the 24th floor contains 9-foot (2.7 m) decorative pineapples.
Above the third setback, consisting of the 24th through 27th floors, the facade contains horizontal bands and zigzagged gray-and-black brick motifs. The section above the fourth setback, between the 27th and 31st floors, serves as a podium for the main shaft of the building. There are Nirosta-steel decorations above the setbacks. At each corner of the 31st floor, large car-hood ornaments were installed to make the base look larger. These corner extensions help counter a common optical illusion seen in tall buildings with horizontal bands, whose taller floors would normally look larger. The 31st floor also contains a gray and white frieze of hubcaps and fenders, which both symbolizes the Chrysler Corporation and serves as a visual signature of the building's Art Deco design. The hood-ornament embellishments take the shape of Mercury's winged helmet and resemble the ornaments installed on Chrysler vehicles at the time.
The shaft of the tower was designed to emphasize both the horizontal and vertical: each of the tower's four sides contains three columns of windows, each framed by bricks and an unbroken marble pillar that rises along the entirety of each side. The spandrels separating the windows contain "alternating vertical stripes in gray and white brick", while each corner contains horizontal rows of black brick.
The Chrysler Building is renowned for, and recognized by, its terraced crown, which is an extension of the main tower. Composed of seven radiating terraced arches, Van Alen's design of the crown is a cruciform groin vault of seven concentric members with transitioning setbacks. The entire crown is clad with Nirosta steel, ribbed and riveted in a radiating sunburst pattern with many triangular vaulted windows, reminiscent of the spokes of a wheel. The windows are repeated, in smaller form, on the terraced crown's seven narrow setbacks. Due to the curved shape of the dome, the Nirosta sheets had to be measured on site, so most of the work was carried out in workshops on the building's 67th and 75th floors. According to Robinson, the terraced crown "continue[s] the wedding-cake layering of the building itself. This concept is carried forward from the 61st floor, whose eagle gargoyles echo the treatment of the 31st, to the spire, which extends the concept of 'higher and narrower' forward to infinite height and infinitesimal width. This unique treatment emphasizes the building's height, giving it an other worldly atmosphere reminiscent of the fantastic architecture of Coney Island or the Far East."
Television station WCBS-TV (Channel 2) originated its transmission from the top of the Chrysler Building in 1938. WCBS-TV transmissions were shifted to the Empire State Building in 1960 in response to competition from RCA's transmitter on that building. For many years WPAT-FM and WTFM (now WKTU) also transmitted from the Chrysler Building, but their move to the Empire State Building by the 1970s ended commercial broadcasting from the structure.
The crown and spire are illuminated by a combination of fluorescent lights framing the crown's distinctive triangular windows and colored floodlights that face toward the building, allowing it to be lit in a variety of schemes for special occasions. The V-shaped fluorescent "tube lighting" – hundreds of 480V 40W bulbs framing 120 window openings – was added in 1981, although it had been part of the original design. Until 1998, the lights were turned off at 2 a.m., but The New York Observer columnist Ron Rosenbaum convinced Tishman Speyer to keep the lights on until 6 a.m. Since 2015, the Chrysler Building and other city skyscrapers have been part of the Audubon Society's Lights Out program, turning off their lights during bird migration seasons.
The interior of the building has several elements that were innovative when the structure was constructed. The partitions between the offices are soundproofed and divided into interchangeable sections, so the layout of any office could be changed quickly and easily. Pipes under the floors carry both telephone and electricity cables.
The lobby is triangular in plan, connecting with entrances on Lexington Avenue, 42nd Street, and 43rd Street. The lobby was the only publicly accessible part of the Chrysler Building by the 2000s. The three entrances contain Nirosta steel doors, above which are etched-glass panels that allow natural light to illuminate the space. The floors contain bands of yellow travertine from Siena, which mark the path between the entrances and elevator banks. The writer Eric Nash described the lobby as a paragon of the Art Deco style, with clear influences of German Expressionism. Chrysler wanted the design to impress other architects and automobile magnates, so he imported various materials regardless of the extra costs incurred.
The walls are covered with huge slabs of African red granite. The walls also contain storefronts and doors made of Nirosta steel. There is a wall panel dedicated to the work of clinchers, surveyors, masons, carpenters, plasterers, and builders. Fifty different figures were modeled after workers who participated in its construction. In 1999, the mural was returned to its original state after a restoration that removed the polyurethane coating and filled in holes that had been added in the 1970s. Originally, Van Alen's plans for the lobby included four large supporting columns, but they were removed after Chrysler objected on the grounds that the columns made the lobby appear "cramped". The lobby has dim lighting which, combined with the appliqués of the lamps, creates an intimate atmosphere and highlights the space. Vertical bars of fluorescent light are covered with Belgian blue marble and Mexican amber onyx bands, which soften and diffuse the light. The marble and onyx bands are designed as inverted chevrons.
Opposite the Lexington Avenue entrance is a security guard's desk topped by a digital clock. The panel behind the desk is made of marble, surrounded by Nirosta steel. The lobby connects to four elevator banks, each of a different design. To the north and south of the security desk are terrazzo staircases leading to the second floor and basement. The stairs contain marble walls and Nirosta-steel railings. The outer walls are flat but are clad with marble strips that are slightly angled to each other, which give the impression of being curved. The inner railings of each stair are designed with zigzagging Art Deco motifs, ending at red-marble newel posts on the ground story. Above each stair are aluminum-leaf ceilings with etched-glass chandeliers.
The ceiling contains a 110-by-67-foot (34 by 20 m) mural, Transport and Human Endeavor, designed by Edward Trumbull. The mural's theme is "energy and man's application of it to the solution of his problems", and it pays homage to the Golden Age of Aviation and the Machine Age. The mural is painted in the shape of a "Y" with ocher and golden tones. The central image of the mural is a "muscled giant whose brain directs his boundless energy to the attainment of the triumphs of this mechanical era", according to a 1930 pamphlet that advertised the building. The mural's Art Deco style is manifested in characteristic triangles, sharp angles, slightly curved lines, chrome ornaments, and numerous patterns. The mural depicts several silver planes, including the Spirit of St. Louis, as well as furnaces of incandescent steel and the building itself.
When the building opened, the first and second floors housed a public exhibition of Chrysler vehicles. The exhibition, known as the Chrysler Automobile Salon, was near the corner of Lexington Avenue and 42nd Street, and opened in 1936. The ground floor featured "invisible glass" display windows, a 51-foot (16 m) diameter turntable upon which automobiles were displayed, and a ceiling with lights arranged in concentric circles. Escalators led to the showroom's second floor where Plymouths, Dodges, and DeSotos were sold. The Chrysler Salon remained operational through at least the 1960s.
There are 32 elevators in the skyscraper, clustered into four banks. At the time of opening, 28 of the elevators were for passenger use. Each bank serves different floors within the building, with several "express" elevators going from the lobby to a few landings in between, while "local" elevators connect the landings with the floors above these intermediate landings. As per Walter Chrysler's wishes, the elevators were designed to run at a rate of 900 feet per minute (270 m/min), despite the 700-foot-per-minute (210 m/min) speed restriction enforced in all city elevators at the time. This restriction was loosened soon after the Empire State Building opened in 1931, as that building had also been equipped with high-speed elevators. The Chrysler Building also had three of the longest elevator shafts in the world at the time of completion.
Over the course of a year, Van Alen painstakingly designed these elevators with the assistance of L.T.M. Ralston, who was in charge of developing the elevator cabs' mechanical parts. The cabs were manufactured by the Otis Elevator Company, while the doors were made by the Tyler Company. The dimensions of each elevator were 5.5 feet (1.7 m) deep by 8 feet (2.4 m) wide. Within the lobby, there are ziggurat-shaped Mexican onyx panels above the elevator doors. The doors are designed in a lotus pattern and are clad with steel and wood. When the doors are closed, they resemble "tall fans set off by metallic palm fronds rising through a series of silver parabolas, whose edges were set off by curved lilies" from the outside, as noted by Curcio. However, when a set of doors is open, the cab behind the doors resembles "an exquisite Art Deco room". These elements were influenced by ancient Egyptian designs, which significantly impacted the Art Deco style. According to Vincent Curcio, "these elevator interiors were perhaps the single most beautiful and, next to the dome, the most important feature of the entire building."
Even though the woods in the elevator cabs were arranged in four basic patterns, each cab had a unique combination of woods. Curcio stated that "if anything the building is based on patterned fabrics, [the elevators] certainly are. Three of the designs could be characterized as having 'geometric', 'Mexican' and vaguely 'art nouveau' motifs, which reflect the various influences on the design of the entire building." The roof of each elevator was covered with a metal plate whose design was unique to that cab, which in turn was placed on a polished wooden pattern that was also customized to the cab. Hidden behind these plates were ceiling fans. Curcio wrote that these elevators "are among the most beautiful small enclosed spaces in New York, and it is fair to say that no one who has seen or been in them has forgotten them". Curcio compared the elevators to the curtains of a Ziegfeld production, noting that each lobby contains lighting that peaks in the middle and slopes down on either side. The decoration of the cabs' interiors was also a nod to the Chrysler Corporation's vehicles: cars built during the building's early years had dashboards with wooden moldings. Both the doors and cab interiors were considered to be works of extraordinary marquetry.
On the 42nd Street side of the Chrysler Building, a staircase from the street leads directly under the building to the New York City Subway's 4, 5, 6, <6>, 7, <7>, and S trains at Grand Central–42nd Street station. It is part of the structure's original design. The Interborough Rapid Transit Company, which at the time was the operator of all the routes serving the 42nd Street station, originally sued to block construction of the new entrance because it would cause crowding, but the New York City Board of Transportation pushed to allow the corridor anyway. Chrysler eventually built and paid for the building's subway entrance. Work on the new entrance started in March 1930 and it opened along with the Chrysler Building two months later.
The basement also had a "hydrozone water bottling unit" that would filter tap water into drinkable water for the building's tenants. The drinkable water would then be bottled and shipped to higher floors.
The private Cloud Club formerly occupied the 66th through 68th floors. It opened in July 1930 with some three hundred members, all wealthy males who formed the city's elite. Its creation was spurred by Texaco's wish for a proper restaurant for its executives prior to renting fourteen floors in the building. The Cloud Club was a compromise between William Van Alen's modern style and Walter Chrysler's stately and traditional tastes. A member had to be elected and, if accepted, paid an initial fee of $200, plus a $150 to $300 annual fee. Texaco executives comprised most of the Cloud Club's membership. The club and its dining room may have inspired the Rainbow Room and the Rockefeller Center Luncheon Club at 30 Rockefeller Plaza.
There was a Tudor-style foyer on the 66th floor with oak paneling, as well as an old English-style grill room with wooden floors, wooden beams, wrought-iron chandeliers, and glass and lead doors. The main dining room had a futuristic appearance, with polished granite columns and etched glass appliqués in Art Deco style. There was a mural of a cloud on the ceiling and a mural of Manhattan on the dining room's north side. The 66th and 67th floors were connected by a Renaissance-style marble and bronze staircase. The 67th floor had an open bar with dark-wood paneling and furniture. On the same floor, Walter Chrysler and Texaco both had private dining rooms. Chrysler's dining room had a black and frosted-blue glass frieze of automobile workers. Texaco's dining room contained a mural across two walls; one wall depicted a town in New England with a Texaco gas station, while the other depicted an oil refinery and Texaco truck. The south side of the 67th floor also contained a library with wood-paneled walls and fluted pilasters. The 68th floor mainly contained service spaces.
In the 1950s and 1960s, members left the Cloud Club for other clubs. Texaco moved to Westchester County in 1977, and the club closed two years later. Although there have been several proposals to rehabilitate the club or transform it into a disco or a gastronomic club, these plans never materialized, as then-owner Jack Kent Cooke reportedly did not want a "conventional" restaurant operating within the old club. Tishman Speyer rented out the top two floors of the old Cloud Club. The old staircase has been removed, as have many of the original decorations, which prompted objections from the Art Deco Society of New York.
Originally, Walter Chrysler had a two-story apartment on the 69th and 70th floors with a fireplace and a private office. The office also contained a gymnasium and the loftiest bathrooms in the city. The office had a medieval ambience with leaded windows, elaborate wooden doors, and heavy plaster. Chrysler did not use his gym much, instead choosing to stay at the Chrysler Corporation's headquarters in Detroit. Subsequently, the 69th and 70th floors were converted into a dental clinic. In 2005, a report by The New York Times found that one of the dentists, Charles Weiss, had operated at the clinic's current rooftop location since 1969. The office still had the suite's original bathroom and gymnasium. Chrysler also had a unit on the 58th through 60th floors, which served as his residence.
From the building's opening until 1945, it contained a 3,900-square-foot (360 m²) observation deck on the 71st floor, called "Celestial". For fifty cents, visitors could walk its full circumference through a corridor with vaulted ceilings painted with celestial motifs and bedecked with small hanging glass planets. The center of the observatory contained the toolbox that Walter P. Chrysler used at the beginning of his career as a mechanic; it was later preserved at the Chrysler Technology Center in Auburn Hills, Michigan. An image of the building resembling a rocket hung above it. According to a contemporary brochure, views of up to 100 miles (160 km) were possible on a clear day, but the small triangular windows of the observatory created strange angles that made viewing difficult, deterring visitors. When the Empire State Building opened in 1931 with two observatories at a higher elevation, the Chrysler observatory lost its clientele. After the observatory closed, it was used to house radio and television broadcasting equipment. Since 1986, the old observatory has housed the office of architects Harvey Morse and Cowperwood Interests.
The stories above the 71st floor are designed mostly for exterior appearance; they function mainly as landings for the stairway to the spire and contain no office space. They are very narrow, have low and sloping roofs, and are used only to house radio transmitters and other mechanical and electrical equipment. For example, the 73rd floor houses the motors of the elevators and a 15,000-US-gallon (57,000 L) water tank, of which 3,500 US gallons (13,000 L) are reserved for extinguishing fires.
In the mid-1920s, New York's metropolitan area surpassed London's as the world's most populous metropolitan area, and its population exceeded ten million by the early 1930s. The era was characterized by profound social and technological changes. Consumer goods such as radio, cinema, and the automobile became widespread. In 1927, Walter Chrysler's automotive company, the Chrysler Corporation, became the third-largest car manufacturer in the United States, behind Ford and General Motors. The following year, Time magazine named Chrysler its "Man of the Year".
The economic boom of the 1920s and speculation in the real estate market fostered a wave of new skyscraper projects in New York City. The Chrysler Building was built as part of an ongoing building boom that resulted in the city having the world's tallest building from 1908 to 1974. Following the end of World War I, European and American architects came to see simplified design as the epitome of the modern era, and Art Deco skyscrapers came to symbolize progress, innovation, and modernity. The 1916 Zoning Resolution restricted the height that street-side exterior walls of New York City buildings could rise before needing to be set back from the street. This led to the construction of Art Deco structures in New York City with significant setbacks, large volumes, and striking silhouettes that were often elaborately decorated. Art Deco buildings were constructed for only a short period of time, but because that period coincided with the city's late-1920s real estate boom, the numerous skyscrapers built in the Art Deco style came to predominate in the city's skyline, giving it the romantic quality seen in films and plays. The Chrysler Building project was shaped by these circumstances.
Originally, the Chrysler Building was to be the Reynolds Building, a project of real estate developer and former New York state senator William H. Reynolds. Prior to his involvement in planning the building, Reynolds was best known for developing Coney Island's Dreamland amusement park. When the amusement park was destroyed by a fire in 1911, Reynolds turned his attention to Manhattan real estate, where he set out to build the tallest building in the world.
In 1921, Reynolds rented a large plot of land at the corner of Lexington Avenue and 42nd Street with the intention of building a tall building on the site. Reynolds did not develop the property for several years, prompting the Cooper Union to try to increase the assessed value of the property in 1924. The move, which would force Reynolds to pay more rent, was unusual because property owners usually sought to decrease their property assessments and pay fewer taxes. Reynolds hired the architect William Van Alen to design a forty-story building there in 1927. Van Alen's original design featured many Modernist stylistic elements, with glazed, curved windows at the corners.
Van Alen was respected in his field for his work on the Albemarle Building at Broadway and 24th Street, designing it in collaboration with his partner H. Craig Severance. Van Alen and Severance complemented each other, with Van Alen being an original, imaginative architect and Severance being a shrewd businessperson who handled the firm's finances. The relationship between them became tense over disagreements on how best to run the firm. A 1924 article in the Architectural Review, praising the Albemarle Building's design, had mentioned Van Alen as the designer in the firm and ignored Severance's role. The architects' partnership dissolved acrimoniously several months later, with lawsuits over the firm's clients and assets lasting over a year. The rivalry influenced the design of the future Chrysler Building, since Severance's more traditional architectural style would otherwise have restrained Van Alen's more modern outlook.
By February 2, 1928, the proposed building's height had been increased to 54 stories, which would have made it the tallest building in Midtown. The proposal was changed again two weeks later, with official plans for a 63-story building. A little more than a week after that, the plan was changed for the third time, with two additional stories added. By this time, 42nd Street and Lexington Avenue were both hubs for construction activity, due to the removal of the Third Avenue Elevated's 42nd Street spur, which was seen as a blight on the area. The adjacent 56-story Chanin Building was also under construction. Because of the elevated spur's removal, real estate speculators believed that Lexington Avenue would become the "Broadway of the East Side", causing a ripple effect that would spur developments farther east.
In April 1928, Reynolds signed a 67-year lease for the plot and finalized the details of his ambitious project. Van Alen's original design called for a base with triple-height showroom windows on the first floor, topped by 12 stories whose glass-wrapped corners would create the impression that the tower was floating in mid-air. Reynolds's main contribution to the building's design was his insistence that it have a metallic crown, despite Van Alen's initial opposition; the metal-and-crystal crown would have looked like "a jeweled sphere" at night. Originally, the skyscraper would have risen 808 feet (246 m), with 67 floors. These plans were approved in June 1928. Van Alen's drawings were unveiled the following August and published in a magazine run by the American Institute of Architects (AIA).
Reynolds ultimately devised an alternate design for the Reynolds Building, which was published in August 1928. The new design was much more conservative, with an Italianate dome that a critic compared to Governor Al Smith's bowler hat, and a brick arrangement on the upper floors that simulated windows in the corners, a detail that remains in the current Chrysler Building. This design almost exactly reflected the shape, setbacks, and the layout of the windows of the current building, but with a different dome.
With the design complete, groundbreaking for the Reynolds Building took place on September 19, 1928, but by late 1928, Reynolds did not have the means to carry on construction. Walter Chrysler offered to buy the building in early October 1928, and Reynolds sold the plot, lease, plans, and architect's services to Chrysler on October 15, 1928, for more than $2.5 million. That day, the Goodwin Construction Company began demolition of what had been built. A contract was awarded on October 28, and demolition was completed on November 9. Chrysler's initial plans for the building were similar to Reynolds's, but with the 808-foot building having 68 floors instead of 67. The plans entailed a ground-floor pedestrian arcade; a facade of stone below the fifth floor and brick-and-terracotta above; and a three-story bronze-and-glass "observation dome" at the top. However, Chrysler wanted a more progressive design, and he worked with Van Alen to redesign the skyscraper to be 925 ft (282 m) tall. At the new height, Chrysler's building would be taller than the 792-foot (241 m) Woolworth Building in lower Manhattan, the world's tallest building at the time. At one point, Chrysler had asked Van Alen to shorten the design by ten floors, but he reversed that decision after realizing that the increased height would also bring increased publicity.
From late 1928 to early 1929, modifications to the design of the dome continued. In March 1929, the press published details of an "artistic dome" shaped like a giant thirty-pointed star, which would be crowned by a sculpture five meters (16 ft) high. The final design of the dome included several arches and triangular windows. Lower down, various architectural details were modeled after Chrysler automobile products, such as the hood ornaments of the Plymouth (see § Designs between setbacks). The building's gargoyles on the 31st floor and the eagles on the 61st floor were created to represent flight and to embody the machine age of the time. Even the topmost needle was built using a process similar to one Chrysler used to manufacture his cars, with precise "hand craftsmanship". In his autobiography, Chrysler wrote that he suggested his building be taller than the Eiffel Tower.
Meanwhile, excavation of the new building's 69-foot-deep (21 m) foundation began in mid-November 1928 and was completed in mid-January 1929, when bedrock was reached. A total of 105,000,000 pounds (48,000,000 kg) of rock and 36,000,000 pounds (16,000,000 kg) of soil were excavated for the foundation, equal to 63% of the future building's weight. Construction of the building proper began on January 21, 1929. The Carnegie Steel Company provided the steel beams, the first of which was installed on March 27; by April 9, the first upright beams had been set into place. The steel structure was "a few floors" high by June 1929, 35 floors high by early August, and completed by September. Despite a frantic steelwork construction pace of about four floors per week, no workers died during the construction of the skyscraper's steelwork. Chrysler lauded this achievement, saying, "It is the first time that any structure in the world has reached such a height, yet the entire steel construction was accomplished without loss of life". In total, 391,881 rivets were used, and approximately 3,826,000 bricks were laid to create the non-loadbearing walls of the skyscraper. Walter Chrysler personally financed the construction with his income from his car company. The Chrysler Building's height officially surpassed the Woolworth Building's on October 16, 1929, making the Chrysler the world's tallest building.
The same year that the Chrysler Building's construction started, banker George L. Ohrstrom proposed the construction of a 47-story office building at 40 Wall Street downtown, designed by Van Alen's former partner Severance. Shortly thereafter, Ohrstrom expanded his project to 60 floors, but it was still shorter than the Woolworth and Chrysler buildings. That April, Severance increased 40 Wall's height to 840 feet (260 m) with 62 floors, exceeding the Woolworth's height by 48 feet (15 m) and the Chrysler's by 32 feet (9.8 m). 40 Wall Street and the Chrysler Building started competing for the title of "world's tallest building". The Empire State Building, on 34th Street and Fifth Avenue, entered the competition in 1929. The race was defined by at least five other proposals, although only the Empire State Building would survive the Wall Street Crash of 1929. The "Race into the Sky", as popular media called it at the time, was representative of the country's optimism in the 1920s, which helped fuel the building boom in major cities. Van Alen expanded the Chrysler Building's height to 925 feet (282 m), prompting Severance to increase the height of 40 Wall Street to 927 feet (283 m) in April 1929. Construction of 40 Wall Street began that May and was completed twelve months later.
In response, Van Alen obtained permission for a 125-foot-long (38 m) spire and had it secretly constructed inside the frame of his building. The spire was delivered to the site in four sections. On October 23, 1929, one week after surpassing the Woolworth Building's height and one day before the Wall Street Crash of 1929, the spire was assembled. According to one account, "the bottom section of the spire was hoisted to the top of the building's dome and lowered into the 66th floor of the building." Then, within 90 minutes, the rest of the spire's pieces were raised and riveted in sequence, raising the tower to 1,046 feet (319 m). Van Alen, who witnessed the process from the street along with the building's engineers and Walter Chrysler, compared the experience to watching a butterfly leaving its cocoon. In the October 1930 edition of Architectural Forum, Van Alen explained the design and construction of the crown and needle:
A high spire structure with a needle-like termination was designed to surmount the dome. This is 185 feet high and 8 feet square at its base. It was made up of four corner angles, with light angle strut and diagonal members, all told weighing 27 tons. It was manifestly impossible to assemble this structure and hoist it as a unit from the ground, and equally impossible to hoist it in sections and place them as such in their final positions. Besides, it would be more spectacular, for publicity value, to have this cloud-piercing needle appear unexpectedly.
The steel tip brought the Chrysler Building to a height of 1,046 feet (319 m), greatly exceeding 40 Wall Street's height. Contemporary news media did not write of the spire's erection, nor were there any press releases celebrating it. Even the New York Herald Tribune, which had provided virtually continuous coverage of the tower's construction, did not report on the installation until days after the spire had been raised.
By ordering Van Alen to change the Chrysler's original roof from a stubby Romanesque dome to the narrow steel spire, Chrysler had ensured that his tower's height would also exceed the Empire State Building's planned height. However, the Empire State's developer John J. Raskob reviewed the plans, realized that he could add five more floors and a spire of his own to his 80-story building, and acquired additional plots to support that building's height extension. Two days later, the Empire State Building's co-developer, former governor Al Smith, announced the updated plans for that skyscraper, with an observation deck on the 86th-floor roof at a height of 1,050 feet (320 m), higher than the Chrysler's 71st-floor observation deck at 783 feet (239 m).
In January 1930, it was announced that the Chrysler Corporation would maintain satellite offices in the Chrysler Building during Automobile Show Week. The skyscraper was never intended to become the Chrysler Corporation's headquarters, which remained in Detroit. The first leases by outside tenants were announced in April 1930, before the building was officially completed. The building was formally opened on May 27, 1930, in a ceremony that coincided with the 42nd Street Property Owners and Merchants Association's meeting that year. In the lobby of the building, a bronze plaque that read "in recognition of Mr. Chrysler's contribution to civic advancement" was unveiled. Former Governor Smith, former Assemblyman Martin G. McCue, and 42nd Street Association president George W. Sweeney were among those in attendance. By June, it was reported that 65% of the available space had been leased. By August, the building was declared complete, but the New York City Department of Construction did not mark it as finished until February 1932.
The added height of the spire allowed the Chrysler Building to surpass 40 Wall Street as the tallest building in the world and the Eiffel Tower as the tallest structure. The Chrysler Building was thus the first man-made structure to be taller than 1,000 feet (300 m); as one newspaper noted, the tower was also taller than the highest points of five states. The tower remained the world's tallest for 11 months after its completion. The Chrysler Building was appraised at $14 million but was exempt from city taxes under an 1859 law that gave tax exemptions to sites owned by the Cooper Union. The city had attempted to repeal the tax exemption, but Cooper Union had opposed that measure. Because the site retains the tax exemption, the building's owners have paid Cooper Union for the use of its land since the building opened. While the Chrysler Corporation was a tenant, it was not involved in the construction or ownership of the Chrysler Building; rather, the tower was a project of Walter P. Chrysler for his children. In his autobiography, Chrysler wrote that he wanted to erect the building "so that his sons would have something to be responsible for".
Van Alen's satisfaction at these accomplishments was likely muted by Walter Chrysler's later refusal to pay the balance of his architectural fee. Chrysler alleged that Van Alen had received bribes from suppliers; moreover, Van Alen had never signed a contract with Chrysler when Chrysler took over the project. Van Alen sued, and the courts ruled in his favor, requiring Chrysler to pay Van Alen $840,000, or six percent of the building's total budget. However, the lawsuit against Chrysler markedly diminished Van Alen's reputation as an architect, which, along with the effects of the Great Depression and negative criticism, ended up ruining his career. Van Alen ended his career as a professor of sculpture at the nearby Beaux-Arts Institute of Design and died in 1954. According to author Neal Bascomb, "The Chrysler Building was his greatest accomplishment, and the one that guaranteed his obscurity."
The Chrysler Building's distinction as the world's tallest building was short-lived. John Raskob realized the 1,050-foot Empire State Building would only be 4 feet (1.2 m) taller than the Chrysler Building, and Raskob was afraid that Walter Chrysler might try to "pull a trick like hiding a rod in the spire and then sticking it up at the last minute." Another revision brought the Empire State Building's roof to 1,250 feet (380 m), making it the tallest building in the world by far when it opened on May 1, 1931. However, the Chrysler Building is still the world's tallest steel-supported brick building. The Chrysler Building fared better commercially than the Empire State Building did: by 1935, the Chrysler had already rented 70 percent of its floor area. By contrast, Empire State had only leased 23 percent of its space and was popularly derided as the "Empty State Building".
The Chrysler family inherited the property after the death of Walter Chrysler in 1940, with the property being under the ownership of the W.P. Chrysler Building Corporation. In 1944, the corporation filed plans to build a 38-story annex to the east of the building, at 666 Third Avenue. In 1949, this was revised to a 32-story annex costing $9 million. The annex building, designed by Reinhard, Hofmeister & Walquist, had a facade similar to that of the original Chrysler Building; because the stone used on the original building was no longer produced, it had to be specially replicated for the annex. Construction started on the annex in June 1950, and the first tenants started leasing in June 1951. The building itself was completed by 1952, and a sky bridge connecting the two buildings' seventh floors was built in 1959.
The family sold the building in 1953 to William Zeckendorf for its assessed price of $18 million. The 1953 deal included the annex and the nearby Graybar Building, which, along with the Chrysler Building, sold for a combined $52 million. The new owners were Zeckendorf's company Webb and Knapp, which held a 75% interest in the sale, and the Graysler Corporation, which held a 25% stake. At the time, it was reported to be the largest real estate sale in New York City's history. In 1957, the Chrysler Building, its annex, and the Graybar Building were sold for $66 million to Lawrence Wien's realty syndicate, setting a new record for the largest sale in the city.
In 1960, the complex was purchased by Sol Goldman and Alex DiLorenzo, who received a mortgage from the Massachusetts Mutual Life Insurance Company. The next year, the building's stainless steel elements, including the needle, crown, gargoyles, and entrance doors, were polished for the first time. A group of ten workers steam-cleaned the facade below the 30th floor, and manually cleaned the portion of the tower above the 30th floor, for a cost of about $200,000. Under Goldman and DiLorenzo's operation, the building began to develop leaks and cracked walls, and about 1,200 cubic yards (920 m³) of garbage piled up in the basement. The scale of the deterioration led one observer to say that the Chrysler Building was being operated "like a tenement in the South Bronx". The Chrysler Building remained profitable until 1974, when the owners faced increasing taxes and fuel costs.
Foreclosure proceedings against the building began in August 1975, when Goldman and DiLorenzo defaulted on the $29 million first mortgage and a $15 million second mortgage. The building was about 17 percent vacant at the time. Massachusetts Mutual acquired the Chrysler Building for $35 million, purchasing all the outstanding debt on the building via several transactions. The next year, the Chrysler Building was designated as a National Historic Landmark. Texaco, one of the building's major tenants, was relocating to Westchester County, New York, by then, vacating hundreds of thousands of square feet at the Chrysler Building. In early 1978, Mass Mutual devised plans to renovate the facade, heating, ventilation, air-conditioning, elevators, lobby murals, and Cloud Club quarters for $23 million. At a press conference announcing the renovation, mayor Ed Koch proclaimed that "the steel eagles and the gargoyles of the Chrysler Building are all shouting the renaissance of New York". Massachusetts Mutual had hired Josephine Sokolski, who proposed substantially modifying Van Alen's original lobby design.
After the renovation was announced, the New York City Landmarks Preservation Commission (LPC) considered designating the Chrysler Building as a city landmark. Though Mass Mutual had proclaimed "sensitivity and respect" for the building's architecture, it opposed the city landmark designation, concerned that the designation would hinder leasing. At the time, the building had 500,000 square feet (46,000 m²) of vacant floor space, representing 40% of the total floor area. The owners hired the Edward S. Gordon Company as the building's leasing agent, and the firm leased 750,000 square feet (70,000 m²) of vacant space within five years. The LPC designated the lobby and facade as city landmarks in September 1978. The LPC also objected that many aspects of Sokolski's planned lobby redesign deviated too much from Van Alen's original design. As a result of these disputes, the renovation of the lobby was delayed.
The building was sold again in August 1979, this time to entrepreneur and Washington Redskins owner Jack Kent Cooke, in a deal that also transferred ownership of the Los Angeles Kings and Lakers to Jerry Buss. At the time, the building was 96 percent occupied. The new owners hired Kenneth Kleiman of Descon Interiors to redesign the lobby and elevator cabs in a style much closer to Van Alen's original design. Cooke also oversaw the completion of a lighting scheme at the pinnacle, which had been part of the original design but was never completed. The lighting system, consisting of 580 fluorescent tubes installed within the triangular windows of the top stories, was first illuminated in September 1981.
Cooke next hired Hoffman Architects to restore the exterior and spire from 1995 to 1996. The joints in the now-closed observation deck were polished, and the facade restored, as part of a $1.5 million project. Some damaged steel strips of the needle were replaced and several parts of the gargoyles were re-welded together. The cleaning received the New York Landmarks Conservancy's Lucy G. Moses Preservation Award for 1997. Cooke died in April 1997, and his mortgage lender Fuji Bank moved to foreclose on the building the next month. Shortly after Fuji announced its intent to foreclose, several developers and companies announced that they were interested in buying the building. Ultimately, 20 potential buyers submitted bids to buy the Chrysler Building and several adjacent buildings.
Tishman Speyer Properties and the Travelers Insurance Group won the right to buy the building in November 1997, having submitted a bid for about $220 million (equal to $400 million in 2022). Tishman Speyer had negotiated a 150-year lease from the Cooper Union, which continued to own the land under the Chrysler Building. In 1998, Tishman Speyer announced that it had hired Beyer Blinder Belle to renovate the building and incorporate it into a commercial complex known as the Chrysler Center. As part of this project, EverGreene Architectural Arts restored the Transport and Human Endeavor mural in the lobby, which had been covered up during the late-1970s renovation. The renovation cost $100 million.
In 2001, a 75 percent stake in the building was sold, for US$300 million (equal to $500 million in 2022), to TMW, the German arm of an Atlanta-based investment fund. In June 2008, it was reported that the Abu Dhabi Investment Council was in negotiations to buy TMW's 75 percent ownership stake, Tishman Speyer's 15 percent stake, and a share of the Trylons retail structure next door for US$800 million. In July 2008, it was announced that the transaction had been completed, and that the Abu Dhabi Investment Council now owned a 90 percent stake in the building, with Tishman Speyer retaining 10 percent.
From 2010 to 2011, the building's energy, plumbing, and waste management systems were renovated. This resulted in a 21 percent decrease in the building's total energy consumption and 64 percent decrease in water consumption. In addition, 81 percent of waste was recycled. In 2012, the building received a LEED Gold accreditation from the U.S. Green Building Council, which recognized the building's environmental sustainability and energy efficiency.
The Abu Dhabi Investment Council and Tishman Speyer put the Chrysler Building on sale again in January 2019. That March, the media reported that Aby Rosen's RFR Holding LLC, in a joint venture with the Austrian SIGNA Group, had reached an agreement to purchase the Chrysler Building at a steeply discounted US$150 million. Rosen had initially planned to convert the building into a hotel, but he dropped these plans in April 2019, citing difficulties with the ground lease. Rosen then announced plans for an observation deck on the 61st-story setback, which the LPC approved in May 2020. Rosen also sought to renegotiate the terms of his ground lease with Cooper Union, and he evicted storeowners from all of the shops in the building's lobby. To attract tenants following the onset of the COVID-19 pandemic in New York City in 2020, Rosen spent $200 million converting the Chrysler Building's ground-floor space into a tenant amenity center.
Chrysler Center is the name of the building complex consisting of the Chrysler Building to the west, Chrysler Building East to the east, and the Chrysler Trylons commercial pavilion in the middle. After Tishman Speyer had acquired the entire complex, the firm renovated it completely from 1998 to 2000.
The structure at 666 Third Avenue, known as the Kent Building at the time, was renovated and renamed Chrysler Building East. This International Style building, built in 1952, is 432 feet (132 m) high and has 32 floors. The mechanical systems were modernized and the interior was modified. Postmodern architect Philip Johnson designed a new facade of dark-blue glass, which was placed about 4 inches (100 mm) in front of the Kent Building's existing facade. The structure did not resemble its western neighbor; Johnson explained that he did not "even like the architecture" of the Chrysler Building, despite acknowledging it as "the most loved building in New York". His design also included a 135,000-square-foot (12,500 m²) extension, which surrounded the elevator core on the western end of the original Kent Building. The expansion utilized 150,000 square feet (14,000 m²) of unused air rights above the buildings in the middle of the block. The Kent Building was not a New York City designated landmark, unlike the Chrysler Building, so its renovation did not require the LPC's approval. After the addition, the total area of the Kent Building was 770,000 square feet (72,000 m²).
A new building, also designed by Philip Johnson, was built between the original skyscraper and the annex. This became the Chrysler Trylons, a commercial pavilion three stories high with a retail area of 22,000 square feet (2,000 m²). Its design consists of three triangular glass "trylons" measuring 57 ft (17 m), 68 ft (21 m), and 73 ft (22 m) tall; each is slanted in a different direction. The trylons are supported by vertical steel mullions measuring 10 in (250 mm) wide; between the mullions are 535 panes of reflective gray glass. The retail structures themselves are placed on either side of the trylons. Due to the complexity of the structural work, structural engineer Severud Associates built a replica at Rimouski, Quebec. Johnson designed the Chrysler Trylons as "a monument for 42nd Street [...] to give you the top of the Chrysler Building at street level."
After these modifications, the total leasable area of the complex was 2,062,772 square feet (191,637.8 m²). The total cost of the project was about $100 million. The renovation won several awards and commendations, including an Energy Star rating from the Environmental Protection Agency; a LEED Gold designation; and the Skyscraper Museum Outstanding Renovation Award of 2001.
In January 1930, the Chrysler Corporation opened satellite offices in the Chrysler Building during Automobile Show Week. In addition to the Chrysler Salon product showroom on the first and second floors, the building had a lounge and a theater for showing films of Chrysler products. Other original large tenants included Time Inc. and the oil company Texaco. Needing more office space, Time moved to Rockefeller Center in 1937. Texaco relocated to a more suburban workplace in Purchase, New York, in 1977. In addition, the offices of Shaw Walker and J. S. Bache & Company were immediately atop the Chrysler Salon, while A. B. Dick, Pan American World Airways, Adams Hats, Schrafft's, and Florsheim Shoes also had offices in the building.
The completed Chrysler Building garnered mixed reviews in the press. Van Alen was hailed as the "Doctor of Altitude" by Architect magazine, while architect Kenneth Murchison called Van Alen the "Ziegfeld of his profession", comparing him to popular Broadway producer Florenz Ziegfeld Jr. The building was praised for being "an expression of the intense activity and vibrant life of our day", and for "teem[ing] with the spirit of modernism, ... the epitome of modern business life, stand[ing] for progress in architecture and in modern building methods." An anonymous critic wrote in Architectural Forum's October 1930 issue: "The Chrysler...stands by itself, something apart and alone. It is simply the realization, the fulfillment in metal and masonry, of a one-man dream, a dream of such ambitions and such magnitude as to defy the comprehension and the criticism of ordinary men or by ordinary standards."
The journalist George S. Chappell called the Chrysler's design "distinctly a stunt design, evolved to make the man in the street look up". Douglas Haskell stated that the building "embodies no compelling, organic idea", and alleged that Van Alen had abandoned "some of his best innovations in behalf of stunts and new 'effects'". Others compared the Chrysler Building to "an upended swordfish", or claimed it had a "Little Nemo"-like design. Lewis Mumford, a supporter of the International Style and one of the foremost architectural critics of the United States at the time, despised the building for its "inane romanticism, meaningless voluptuousness, [and] void symbolism". The public also had mixed reviews of the Chrysler Building, as Murchison wrote: "Some think it's a freak; some think it's a stunt."
Later reviews were more positive. Architect Robert A. M. Stern wrote that the Chrysler Building was "the most extreme example of the [1920s and 1930s] period's stylistic experimentation", as contrasted with 40 Wall Street and its "thin" detailing. George H. Douglas wrote in 2004 that the Chrysler Building "remains one of the most appealing and awe-inspiring of skyscrapers". Architect Le Corbusier called the building "hot jazz in stone and steel". Architectural critic Ada Louise Huxtable stated that the building had "a wonderful, decorative, evocative aesthetic", while Paul Goldberger noted the "compressed, intense energy" of the lobby, the "magnificent" elevators, and the "magical" view from the crown. Anthony W. Robins said the Chrysler Building was "one-of-a-kind, staggering, romantic, soaring, the embodiment of 1920s skyscraper pizzazz, the great symbol of Art Deco New York".
The LPC said that the tower "embodies the romantic essence of the New York City skyscraper". The travel guide Frommer's gave the building an "exceptional" recommendation, with author Pauline Frommer writing, "In the Chrysler Building we see the roaring-twenties version of what Alan Greenspan called 'irrational exuberance'—a last burst of corporate headquarter building before stocks succumbed to the thudding crash of 1929."
The Chrysler Building appears in several films set in New York and is widely regarded as one of the city's most celebrated buildings. A 1996 survey of New York architects revealed it as their favorite, and The New York Times described it in 2005 as "the single most important emblem of architectural imagery on the New York skyline". In mid-2005, the Skyscraper Museum in Lower Manhattan asked 100 architects, builders, critics, engineers, historians, and scholars, among others, to choose their 10 favorites among 25 of the city's towers. The Chrysler Building came in first place, with 90 respondents placing it on their ballots. In 2007, the building ranked ninth among 150 buildings in the AIA's List of America's Favorite Architecture.
The Chrysler Building is widely heralded as an Art Deco icon. Fodor's New York City 2010 described the building as being "one of the great art deco masterpieces" which "wins many a New Yorker's vote for the city's most iconic and beloved skyscraper". Frommer's states that the Chrysler was "one of the most impressive Art Deco buildings ever constructed". Insight Guides' 2016 edition maintains that the Chrysler Building is considered among the city's "most beautiful" buildings. Its distinctive profile has inspired similar skyscrapers worldwide, including One Liberty Place in Philadelphia, Two Prudential Plaza in Chicago, and the Al Kazim Towers in Dubai. In addition, the New York-New York Hotel and Casino in Paradise, Nevada, contains the "Chrysler Tower", a replica of the Chrysler Building measuring 35 or 40 stories tall. A portion of the hotel's interior was also designed to resemble the Chrysler Building's interior.
While seen in many films, the Chrysler Building almost never appears as a main setting, prompting architect and author James Sanders to quip that it should win "the Award for Best Supporting Skyscraper". The building was supposed to be featured in the 1933 film King Kong, but it makes only a cameo at the end because the producers opted to give the Empire State Building the central role. The Chrysler Building notably appears in the background of The Wiz (1978); as the setting of much of Q - The Winged Serpent (1982); in the opening credits of The Shadow of the Witness (1987); and during or after apocalyptic events in Independence Day (1996), Armageddon (1998), Deep Impact (1998), Godzilla (1998), and A.I. Artificial Intelligence (2001). The building also appears in other films, such as Spider-Man (2002), Two Weeks Notice (2002), Fantastic Four: Rise of the Silver Surfer (2007), The Sorcerer's Apprentice (2010), The Avengers (2012), and Men in Black 3 (2012).
In addition to films, the building is mentioned in the number "It's the Hard Knock Life" from the musical Annie. In the Squaresoft video game Parasite Eve, the building is the setting for the post-game content.
The Chrysler Building is frequently a subject of photographs. In December 1929, Walter Chrysler hired Margaret Bourke-White to take publicity images from a scaffold 400 feet (120 m) high. She was deeply inspired by the new structure and especially smitten by the massive eagle's-head figures projecting off the building. In her autobiography, Portrait of Myself, Bourke-White wrote, "On the sixty-first floor, the workmen started building some curious structures which overhung 42nd Street and Lexington Avenue below. When I learned these were to be gargoyles à la Notre Dame, but made of stainless steel as more suitable for the twentieth century, I decided that here would be my new studio. There was no place in the world that I would accept as a substitute."
According to one account, Bourke-White wanted to live in the building for the duration of the photo shoot, but the only person able to do so was the janitor, so she was instead relegated to co-leasing a studio with Time Inc. In 1930, several of her photographs were used in a special report on skyscrapers in the then-new Fortune magazine. Bourke-White worked in a 61st-floor studio designed by John Vassos until she was evicted in 1934. In 1934, Bourke-White's partner Oscar Graubner took a famous photo called "Margaret Bourke-White atop the Chrysler Building", which depicts her taking a photo of the city's skyline while sitting on one of the 61st-floor eagle ornaments. On October 5, 1998, Christie's auctioned the photograph for $96,000. In addition, during a January 1931 dance organized by the Society of Beaux-Arts, six architects, including Van Alen, were photographed while wearing costumes resembling the buildings that each architect designed.
"title": "Architecture"
},
{
"paragraph_id": 26,
"text": "The basement also had a \"hydrozone water bottling unit\" that would filter tap water into drinkable water for the building's tenants. The drinkable water would then be bottled and shipped to higher floors.",
"title": "Architecture"
},
{
"paragraph_id": 27,
"text": "The private Cloud Club formerly occupied the 66th through 68th floors. It opened in July 1930 with some three hundred members, all wealthy males who formed the city's elite. Its creation was spurred by Texaco's wish for a proper restaurant for its executives prior to renting fourteen floors in the building. The Cloud Club was a compromise between William Van Alen's modern style and Walter Chrysler's stately and traditional tastes. A member had to be elected and, if accepted, paid an initial fee of $200, plus a $150 to $300 annual fee. Texaco executives comprised most of the Cloud Club's membership. The club and its dining room may have inspired the Rainbow Room and the Rockefeller Center Luncheon Club at 30 Rockefeller Plaza.",
"title": "Architecture"
},
{
"paragraph_id": 28,
"text": "There was a Tudor-style foyer on the 66th floor with oak paneling, as well as an old English-style grill room with wooden floors, wooden beams, wrought-iron chandeliers, and glass and lead doors. The main dining room had a futuristic appearance, with polished granite columns and etched glass appliqués in Art Deco style. There was a mural of a cloud on the ceiling and a mural of Manhattan on the dining room's north side. The 66th and 67th floors were connected by a Renaissance-style marble and bronze staircase. The 67th floor had an open bar with dark-wood paneling and furniture. On the same floor, Walter Chrysler and Texaco both had private dining rooms. Chrysler's dining room had a black and frosted-blue glass frieze of automobile workers. Texaco's dining room contained a mural across two walls; one wall depicted a town in New England with a Texaco gas station, while the other depicted an oil refinery and Texaco truck. The south side of the 67th floor also contained a library with wood-paneled walls and fluted pilasters. The 68th floor mainly contained service spaces.",
"title": "Architecture"
},
{
"paragraph_id": 29,
"text": "In the 1950s and 1960s, members left the Cloud Club for other clubs. Texaco moved to Westchester County in 1977, and the club closed two years later. Although there have been several projects to rehabilitate the club or transform it into a disco or a gastronomic club, these plans have never materialized, as then-owner Cooke reportedly did not want a \"conventional\" restaurant operating within the old club. Tishman Speyer rented the top two floors of the old Cloud Club. The old staircase has been removed, as have many of the original decorations, which prompted objections from the Art Deco Society of New York.",
"title": "Architecture"
},
{
"paragraph_id": 30,
"text": "Originally, Walter Chrysler had a two-story apartment on the 69th and 70th floors with a fireplace and a private office. The office also contained a gymnasium and the loftiest bathrooms in the city. The office had a medieval ambience with leaded windows, elaborate wooden doors, and heavy plaster. Chrysler did not use his gym much, instead choosing to stay at the Chrysler Corporation's headquarters in Detroit. Subsequently, the 69th and 70th floors were converted into a dental clinic. In 2005, a report by The New York Times found that one of the dentists, Charles Weiss, had operated at the clinic's current rooftop location since 1969. The office still had the suite's original bathroom and gymnasium. Chrysler also had a unit on the 58th through 60th floors, which served as his residence.",
"title": "Architecture"
},
{
"paragraph_id": 31,
"text": "From the building's opening until 1945, it contained a 3,900 square feet (360 m) observation deck on the 71st floor, called \"Celestial\". For fifty cents visitors could transit its circumference through a corridor with vaulted ceilings painted with celestial motifs and bedecked with small hanging glass planets. The center of the observatory contained the toolbox that Walter P. Chrysler used at the beginning of his career as a mechanic; it was later preserved at the Chrysler Technology Center in Auburn Hills, Michigan. An image of the building resembling a rocket hung above it. According to a contemporary brochure, views of up to 100 miles (160 km) were possible on a clear day; but the small triangular windows of the observatory created strange angles that made viewing difficult, depressing traffic. When the Empire State Building opened in 1931 with two observatories at a higher elevation, the Chrysler observatory lost its clientele. After the observatory closed, it was used to house radio and television broadcasting equipment. Since 1986, the old observatory has housed the office of architects Harvey Morse and Cowperwood Interests.",
"title": "Architecture"
},
{
"paragraph_id": 32,
"text": "The stories above the 71st floor are designed mostly for exterior appearance, functioning mainly as landings for the stairway to the spire and do not contain office space. They are very narrow, have low and sloping roofs, and are only used to house radio transmitters and other mechanical and electrical equipment. For example, the 73rd floor houses the motors of the elevators and a 15,000-US-gallon (57,000 L) water tank, of which 3,500 US gallons (13,000 L) are reserved for extinguishing fires.",
"title": "Architecture"
},
{
"paragraph_id": 33,
"text": "In the mid-1920s, New York's metropolitan area surpassed London's as the world's most populous metropolitan area and its population exceeded ten million by the early 1930s. The era was characterized by profound social and technological changes. Consumer goods such as radio, cinema, and the automobile became widespread. In 1927, Walter Chrysler's automotive company, the Chrysler Corporation, became the third-largest car manufacturer in the United States, behind Ford and General Motors. The following year, Chrysler was named Time magazine's \"Person of the Year\".",
"title": "History"
},
{
"paragraph_id": 34,
"text": "The economic boom of the 1920s and speculation in the real estate market fostered a wave of new skyscraper projects in New York City. The Chrysler Building was built as part of an ongoing building boom that resulted in the city having the world's tallest building from 1908 to 1974. Following the end of World War I, European and American architects came to see simplified design as the epitome of the modern era and Art Deco skyscrapers as symbolizing progress, innovation, and modernity. The 1916 Zoning Resolution restricted the height that street-side exterior walls of New York City buildings could rise before needing to be setback from the street. This led to the construction of Art Deco structures in New York City with significant setbacks, large volumes, and striking silhouettes that were often elaborately decorated. Art Deco buildings were constructed for only a short period of time; but because that period was during the city's late-1920s real estate boom, the numerous skyscrapers built in the Art Deco style predominated in the city skyline, giving it the romantic quality seen in films and plays. The Chrysler Building project was shaped by these circumstances.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "Originally, the Chrysler Building was to be the Reynolds Building, a project of real estate developer and former New York state senator William H. Reynolds. Prior to his involvement in planning the building, Reynolds was best known for developing Coney Island's Dreamland amusement park. When the amusement park was destroyed by a fire in 1911, Reynolds turned his attention to Manhattan real estate, where he set out to build the tallest building in the world.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "In 1921, Reynolds rented a large plot of land at the corner of Lexington Avenue and 42nd Street with the intention of building a tall building on the site. Reynolds did not develop the property for several years, prompting the Cooper Union to try to increase the assessed value of the property in 1924. The move, which would force Reynolds to pay more rent, was unusual because property owners usually sought to decrease their property assessments and pay fewer taxes. Reynolds hired the architect William Van Alen to design a forty-story building there in 1927. Van Alen's original design featured many Modernist stylistic elements, with glazed, curved windows at the corners.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Van Alen was respected in his field for his work on the Albemarle Building at Broadway and 24th Street, designing it in collaboration with his partner H. Craig Severance. Van Alen and Severance complemented each other, with Van Alen being an original, imaginative architect and Severance being a shrewd businessperson who handled the firm's finances. The relationship between them became tense over disagreements on how best to run the firm. A 1924 article in the Architectural Review, praising the Albemarle Building's design, had mentioned Van Alen as the designer in the firm and ignored Severance's role. The architects' partnership dissolved acrimoniously several months later, with lawsuits over the firm's clients and assets lasting over a year. The rivalry influenced the design of the future Chrysler Building, since Severance's more traditional architectural style would otherwise have restrained Van Alen's more modern outlook.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "By February 2, 1928, the proposed building's height had been increased to 54 stories, which would have made it the tallest building in Midtown. The proposal was changed again two weeks later, with official plans for a 63-story building. A little more than a week after that, the plan was changed for the third time, with two additional stories added. By this time, 42nd Street and Lexington Avenue were both hubs for construction activity, due to the removal of the Third Avenue Elevated's 42nd Street spur, which was seen as a blight on the area. The adjacent 56-story Chanin Building was also under construction. Because of the elevated spur's removal, real estate speculators believed that Lexington Avenue would become the \"Broadway of the East Side\", causing a ripple effect that would spur developments farther east.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "In April 1928, Reynolds signed a 67-year lease for the plot and finalized the details of his ambitious project. Van Alen's original design for the skyscraper called for a base with first-floor showroom windows that would be triple-height, and above would be 12 stories with glass-wrapped corners, to create the impression that the tower was floating in mid-air. Reynolds's main contribution to the building's design was his insistence that it have a metallic crown, despite Van Alen's initial opposition; the metal-and-crystal crown would have looked like \"a jeweled sphere\" at night. Originally, the skyscraper would have risen 808 feet (246 m), with 67 floors. These plans were approved in June 1928. Van Alen's drawings were unveiled in the following August and published in a magazine run by the American Institute of Architects (AIA).",
"title": "History"
},
{
"paragraph_id": 40,
"text": "Reynolds ultimately devised an alternate design for the Reynolds Building, which was published in August 1928. The new design was much more conservative, with an Italianate dome that a critic compared to Governor Al Smith's bowler hat, and a brick arrangement on the upper floors that simulated windows in the corners, a detail that remains in the current Chrysler Building. This design almost exactly reflected the shape, setbacks, and the layout of the windows of the current building, but with a different dome.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "With the design complete, groundbreaking for the Reynolds Building took place on September 19, 1928, but by late 1928, Reynolds did not have the means to carry on construction. Walter Chrysler offered to buy the building in early October 1928, and Reynolds sold the plot, lease, plans, and architect's services to Chrysler on October 15, 1928, for more than $2.5 million. That day, the Goodwin Construction Company began demolition of what had been built. A contract was awarded on October 28, and demolition was completed on November 9. Chrysler's initial plans for the building were similar to Reynolds's, but with the 808-foot building having 68 floors instead of 67. The plans entailed a ground-floor pedestrian arcade; a facade of stone below the fifth floor and brick-and-terracotta above; and a three-story bronze-and-glass \"observation dome\" at the top. However, Chrysler wanted a more progressive design, and he worked with Van Alen to redesign the skyscraper to be 925 ft (282 m) tall. At the new height, Chrysler's building would be taller than the 792-foot (241 m) Woolworth Building, a building in lower Manhattan that was the world's tallest at the time. At one point, Chrysler had requested that Van Alen shorten the design by ten floors, but reneged on that decision after realizing that the increased height would also result in increased publicity.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "From late 1928 to early 1929, modifications to the design of the dome continued. In March 1929, the press published details of an \"artistic dome\" that had the shape of a giant thirty-pointed star, which would be crowned by a sculpture five meters high. The final design of the dome included several arches and triangular windows. Lower down, various architectural details were modeled after Chrysler automobile products, such as the hood ornaments of the Plymouth (see § Designs between setbacks). The building's gargoyles on the 31st floor and the eagles on the 61st floor, were created to represent flight, and to embody the machine age of the time. Even the topmost needle was built using a process similar to one Chrysler used to manufacture his cars, with precise \"hand craftmanship\". In his autobiography, Chrysler says he suggested that his building be taller than the Eiffel Tower.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "Meanwhile, excavation of the new building's 69-foot-deep (21 m) foundation began in mid-November 1928 and was completed in mid-January 1929, when bedrock was reached. A total of 105,000,000 pounds (48,000,000 kg) of rock and 36,000,000 pounds (16,000,000 kg) of soil were excavated for the foundation, equal to 63% of the future building's weight. Construction of the building proper began on January 21, 1929. The Carnegie Steel Company provided the steel beams, the first of which was installed on March 27; and by April 9, the first upright beams had been set into place. The steel structure was \"a few floors\" high by June 1929, 35 floors high by early August, and completed by September. Despite a frantic steelwork construction pace of about four floors per week, no workers died during the construction of the skyscraper's steelwork. Chrysler lauded this achievement, saying, \"It is the first time that any structure in the world has reached such a height, yet the entire steel construction was accomplished without loss of life\". In total, 391,881 rivets were used, and approximately 3,826,000 bricks were laid to create the non-loadbearing walls of the skyscraper. Walter Chrysler personally financed the construction with his income from his car company. The Chrysler Building's height officially surpassed the Woolworth's on October 16, 1929, thereby becoming the world's tallest structure.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "The same year that the Chrysler Building's construction started, banker George L. Ohrstrom proposed the construction of a 47-story office building at 40 Wall Street downtown, designed by Van Alen's former partner Severance. Shortly thereafter, Ohrstrom expanded his project to 60 floors, but it was still shorter than the Woolworth and Chrysler buildings. That April, Severance increased 40 Wall's height to 840 feet (260 m) with 62 floors, exceeding the Woolworth's height by 48 feet (15 m) and the Chrysler's by 32 feet (9.8 m). 40 Wall Street and the Chrysler Building started competing for the title of \"world's tallest building\". The Empire State Building, on 34th Street and Fifth Avenue, entered the competition in 1929. The race was defined by at least five other proposals, although only the Empire State Building would survive the Wall Street Crash of 1929. The \"Race into the Sky\", as popular media called it at the time, was representative of the country's optimism in the 1920s, which helped fuel the building boom in major cities. Van Alen expanded the Chrysler Building's height to 925 feet (282 m), prompting Severance to increase the height of 40 Wall Street to 927 feet (283 m) in April 1929. Construction of 40 Wall Street began that May and was completed twelve months later.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "In response, Van Alen obtained permission for a 125-foot-long (38 m) spire and had it secretly constructed inside the frame of his building. The spire was delivered to the site in four different sections. On October 23, 1929, one week after surpassing the Woolworth Building's height and one day before the Wall Street Crash of 1929, the spire was assembled. According to one account, \"the bottom section of the spire was hoisted to the top of the building's dome and lowered into the 66th floor of the building.\" Then, within 90 minutes the rest of the spire's pieces were raised and riveted in sequence, raising the tower to 1,046 feet. Van Alen, who witnessed the process from the street along with its engineers and Walter Chrysler, compared the experience to watching a butterfly leaving its cocoon. In the October 1930 edition of Architectural Forum, Van Alen explained the design and construction of the crown and needle:",
"title": "History"
},
{
"paragraph_id": 46,
"text": "A high spire structure with a needle-like termination was designed to surmount the dome. This is 185 feet high and 8 feet square at its base. It was made up of four corner angles, with light angle strut and diagonal members, all told weighing 27 tons. It was manifestly impossible to assemble this structure and hoist it as a unit from the ground, and equally impossible to hoist it in sections and place them as such in their final positions. Besides, it would be more spectacular, for publicity value, to have this cloud-piercing needle appear unexpectedly.",
"title": "History"
},
{
"paragraph_id": 47,
"text": "The steel tip brought the Chrysler Building to a height of 1,046 feet (319 m), greatly exceeding 40 Wall Street's height. Contemporary news media did not write of the spire's erection, nor were there any press releases celebrating the spire's erection. Even the New York Herald Tribune, which had virtually continuous coverage of the tower's construction, did not report on the spire's installation until days after the spire had been raised.",
"title": "History"
},
{
"paragraph_id": 48,
"text": "Chrysler realized that his tower's height would exceed the Empire State Building's as well, having ordered Van Alen to change the Chrysler's original roof from a stubby Romanesque dome to the narrow steel spire. However, the Empire State's developer John J. Raskob reviewed the plans and realized that he could add five more floors and a spire of his own to his 80-story building and acquired additional plots to support that building's height extension. Two days later, the Empire State Building's co-developer, former governor Al Smith, announced the updated plans for that skyscraper, with an observation deck on the 86th-floor roof at a height of 1,050 feet (320 m), higher than the Chrysler's 71st-floor observation deck at 783 feet (239 m).",
"title": "History"
},
{
"paragraph_id": 49,
"text": "In January 1930, it was announced that the Chrysler Corporation would maintain satellite offices in the Chrysler Building during Automobile Show Week. The skyscraper was never intended to become the Chrysler's Corporation's headquarters, which remained in Detroit. The first leases by outside tenants were announced in April 1930, before the building was officially completed. The building was formally opened on May 27, 1930, in a ceremony that coincided with the 42nd Street Property Owners and Merchants Association's meeting that year. In the lobby of the building, a bronze plaque that read \"in recognition of Mr. Chrysler's contribution to civic advancement\" was unveiled. Former Governor Smith, former Assemblyman Martin G. McCue, and 42nd Street Association president George W. Sweeney were among those in attendance. By June, it was reported that 65% of the available space had been leased. By August, the building was declared complete, but the New York City Department of Construction did not mark it as finished until February 1932.",
"title": "History"
},
{
"paragraph_id": 50,
"text": "The added height of the spire allowed the Chrysler Building to surpass 40 Wall Street as the tallest building in the world and the Eiffel Tower as the tallest structure. The Chrysler Building was thus the first man-made structure to be taller than 1,000 feet (300 m); and as one newspaper noted, the tower was also taller than the highest points of five states. The tower remained the world's tallest for 11 months after its completion. The Chrysler Building was appraised at $14 million, but was exempt from city taxes per an 1859 law that gave tax exemptions to sites owned by the Cooper Union. The city had attempted to repeal the tax exemption, but Cooper Union had opposed that measure. Because the Chrysler Building retains the tax exemption, it has paid Cooper Union for the use of their land since opening. While the Chrysler Corporation was a tenant, it was not involved in the construction or ownership of the Chrysler Building; rather, the tower was a project of Walter P. Chrysler for his children. In his autobiography, Chrysler wrote that he wanted to erect the building \"so that his sons would have something to be responsible for\".",
"title": "History"
},
{
"paragraph_id": 51,
"text": "Van Alen's satisfaction at these accomplishments was likely muted by Walter Chrysler's later refusal to pay the balance of his architectural fee. Chrysler alleged that Van Alen had received bribes from suppliers, and Van Alen had not signed any contracts with Walter Chrysler when he took over the project. Van Alen sued and the courts ruled in his favor, requiring Chrysler to pay Van Alen $840,000, or six percent of the total budget of the building. However, the lawsuit against Chrysler markedly diminished Van Alen's reputation as an architect, which, along with the effects of the Great Depression and negative criticism, ended up ruining his career. Van Alen ended his career as professor of sculpture at the nearby Beaux-Arts Institute of Design and died in 1954. According to author Neal Bascomb, \"The Chrysler Building was his greatest accomplishment, and the one that guaranteed his obscurity.\"",
"title": "History"
},
{
"paragraph_id": 52,
"text": "The Chrysler Building's distinction as the world's tallest building was short-lived. John Raskob realized the 1,050-foot Empire State Building would only be 4 feet (1.2 m) taller than the Chrysler Building, and Raskob was afraid that Walter Chrysler might try to \"pull a trick like hiding a rod in the spire and then sticking it up at the last minute.\" Another revision brought the Empire State Building's roof to 1,250 feet (380 m), making it the tallest building in the world by far when it opened on May 1, 1931. However, the Chrysler Building is still the world's tallest steel-supported brick building. The Chrysler Building fared better commercially than the Empire State Building did: by 1935, the Chrysler had already rented 70 percent of its floor area. By contrast, Empire State had only leased 23 percent of its space and was popularly derided as the \"Empty State Building\".",
"title": "History"
},
{
"paragraph_id": 53,
"text": "The Chrysler family inherited the property after the death of Walter Chrysler in 1940, with the property being under the ownership of W.P. Chrysler Building Corporation. In 1944, the corporation filed plans to build a 38-story annex to the east of the building, at 666 Third Avenue. In 1949, this was revised to a 32-story annex costing $9 million. The annex building, designed by Reinhard, Hofmeister & Walquist, had a facade similar to that of the original Chrysler Building. The stone for the original building was no longer manufactured, and had to be specially replicated. Construction started on the annex in June 1950, and the first tenants started leasing in June 1951. The building itself was completed by 1952, and a sky bridge connecting the two buildings' seventh floors was built in 1959.",
"title": "History"
},
{
"paragraph_id": 54,
"text": "The family sold the building in 1953 to William Zeckendorf for its assessed price of $18 million. The 1953 deal included the annex and the nearby Graybar Building, which, along with the Chrysler Building, sold for a combined $52 million. The new owners were Zeckendorf's company Webb and Knapp, who held a 75% interest in the sale, and the Graysler Corporation, who held a 25% stake. At the time, it was reported to be the largest real estate sale in New York City's history. In 1957, the Chrysler Building, its annex, and the Graybar Building were sold for $66 million to Lawrence Wien's realty syndicate, setting a new record for the largest sale in the city.",
"title": "History"
},
{
"paragraph_id": 55,
"text": "In 1960, the complex was purchased by Sol Goldman and Alex DiLorenzo, who received a mortgage from the Massachusetts Mutual Life Insurance Company. The next year, the building's stainless steel elements, including the needle, crown, gargoyles, and entrance doors, were polished for the first time. A group of ten workers steam-cleaned the facade below the 30th floor, and manually cleaned the portion of the tower above the 30th floor, for a cost of about $200,000. Under Goldman and DiLorenzo's operation, the building began to develop leaks and cracked walls, and about 1,200 cubic yards (920 m) of garbage piled up in the basement. The scale of the deterioration led one observer to say that the Chrysler Building was being operated \"like a tenement in the South Bronx\". The Chrysler Building remained profitable until 1974, when the owners faced increasing taxes and fuel costs.",
"title": "History"
},
{
"paragraph_id": 56,
"text": "Foreclosure proceedings against the building began in August 1975, when Goldman and DiLorenzo defaulted on the $29 million first mortgage and a $15 million second mortgage. The building was about 17 percent vacant at the time. Massachusetts Mutual acquired the Chrysler Building for $35 million, purchasing all the outstanding debt on the building via several transactions. The next year, the Chrysler Building was designated as a National Historic Landmark. Texaco, one of the building's major tenants, was relocating to Westchester County, New York, by then, vacating hundreds of thousands of square feet at the Chrysler Building. In early 1978, Mass Mutual devised plans to renovate the facade, heating, ventilation, air‐conditioning, elevators, lobby murals, and Cloud Club headquarters for $23 million. At a press conference announcing the renovation, mayor Ed Koch proclaimed that \"the steel eagles and the gargoyles of the Chrysler Building are all shouting the renaissance of New York\". Massachusetts Mutual had hired Josephine Sokolski, who had proposed modifying Van Alen's original lobby design substantially.",
"title": "History"
},
{
"paragraph_id": 57,
"text": "After the renovation was announced, the New York City Landmarks Preservation Commission (LPC) considered designating the Chrysler Building as a city landmark. Though Mass Mutual had proclaimed \"sensitivity and respect\" for the building's architecture, it had opposed the city landmark designation, concerned that the designation would hinder leasing. At the time, the building had 500,000 square feet (46,000 m) of vacant floor space, representing 40% of the total floor area. The owners hired the Edward S. Gordon Company as the building's leasing agent, and the firm leased 750,000 square feet (70,000 m) of vacant space within five years. The LPC designated the lobby and facade as city landmarks in September 1978. Massachusetts Mutual had hired Josephine Sokolski to renovate the lobby, but the LPC objected that many aspects of Sokolski's planned redesign had deviated too much from Van Alen's original design. As a result of these disputes, the renovation of the lobby was delayed.",
"title": "History"
},
{
"paragraph_id": 58,
"text": "The building was sold again in August 1979, this time to entrepreneur and Washington Redskins owner Jack Kent Cooke, in a deal that also transferred ownership of the Los Angeles Kings and Lakers to Jerry Buss. At the time, the building was 96 percent occupied. The new owners hired Kenneth Kleiman of Descon Interiors to redesign the lobby and elevator cabs in a style that was much more closer to Van Alen's original design. Cooke also oversaw the completion of a lighting scheme at the pinnacle, which had been part of the original design but was never completed. The lighting system, consisting of 580 fluorescent tubes installed within the triangular windows of the top stories, was first illuminated in September 1981.",
"title": "History"
},
{
"paragraph_id": 59,
"text": "Cooke next hired Hoffman Architects to restore the exterior and spire from 1995 to 1996. The joints in the now-closed observation deck were polished, and the facade restored, as part of a $1.5 million project. Some damaged steel strips of the needle were replaced and several parts of the gargoyles were re-welded together. The cleaning received the New York Landmarks Conservancy's Lucy G. Moses Preservation Award for 1997. Cooke died in April 1997, and his mortgage lender Fuji Bank moved to foreclose on the building the next month. Shortly after Fuji announced its intent to foreclose, several developers and companies announced that they were interested in buying the building. Ultimately, 20 potential buyers submitted bids to buy the Chrysler Building and several adjacent buildings.",
"title": "History"
},
{
"paragraph_id": 60,
"text": "Tishman Speyer Properties and the Travelers Insurance Group won the right to buy the building in November 1997, having submitted a bid for about $220 million (equal to $400 million in 2022). Tishman Speyer had negotiated a 150-year lease from the Cooper Union, which continued to own the land under the Chrysler Building. In 1998, Tishman Speyer announced that it had hired Beyer Blinder Belle to renovate the building and incorporate it into a commercial complex known as the Chrysler Center. As part of this project, EverGreene Architectural Arts restored the Transport and Human Endeavor mural in the lobby, which had been covered up during the late-1970s renovation. The renovation cost $100 million.",
"title": "History"
},
{
"paragraph_id": 61,
"text": "In 2001, a 75 percent stake in the building was sold, for US$300 million (equal to $500 million in 2022), to TMW, the German arm of an Atlanta-based investment fund. In June 2008, it was reported that the Abu Dhabi Investment Council was in negotiations to buy TMW's 75 percent ownership stake, Tishman Speyer's 15 percent stake, and a share of the Trylons retail structure next door for US$800 million. In July 2008, it was announced that the transaction had been completed, and that the Abu Dhabi Investment Council now owned a 90 percent stake in the building, with Tishman Speyer retaining 10 percent.",
"title": "History"
},
{
"paragraph_id": 62,
"text": "From 2010 to 2011, the building's energy, plumbing, and waste management systems were renovated. This resulted in a 21 percent decrease in the building's total energy consumption and 64 percent decrease in water consumption. In addition, 81 percent of waste was recycled. In 2012, the building received a LEED Gold accreditation from the U.S. Green Building Council, which recognized the building's environmental sustainability and energy efficiency.",
"title": "History"
},
{
"paragraph_id": 63,
"text": "The Abu Dhabi Investment Council and Tishman Speyer put the Chrysler Building on sale again in January 2019. That March, the media reported that Aby Rosen's RFR Holding LLC, in a joint venture with the Austrian SIGNA Group, had reached an agreement to purchase the Chrysler Building at a steeply discounted US$150 million. Rosen had initially planned to convert the building into a hotel, but he dropped these plans in April 2019, citing difficulties with the ground lease. Rosen then announced plans for an observation deck on the 61st-story setback, which the LPC approved in May 2020. Rosen also sought to renegotiate the terms of his ground lease with Cooper Union, and he evicted storeowners from all of the shops in the building's lobby. To attract tenants following the onset of the COVID-19 pandemic in New York City in 2020, Rosen spent $200 million converting the Chrysler Building's ground-floor space into a tenant amenity center.",
"title": "History"
},
{
"paragraph_id": 64,
"text": "Chrysler Center is the name of the building complex consisting of the Chrysler Building to the west, Chrysler Building East to the east, and the Chrysler Trylons commercial pavilion in the middle. After Tishman Speyer had acquired the entire complex, the firm renovated it completely from 1998 to 2000.",
"title": "Chrysler Center"
},
{
"paragraph_id": 65,
"text": "The structure at 666 Third Avenue, known as the Kent Building at the time, was renovated and renamed Chrysler Building East. This International Style building, built in 1952, is 432 feet (132 m) high and has 32 floors. The mechanical systems were modernized and the interior was modified. Postmodern architect Philip Johnson designed a new facade of dark-blue glass, which was placed about 4 inches (100 mm) in front of the Kent Building's existing facade. The structure did not resemble its western neighbor; Johnson explained that he did not \"even like the architecture\" of the Chrysler Building, despite acknowledging it as \"the most loved building in New York\". His design also included a 135,000-square-foot (12,500 m) extension. which surrounded the elevator core on the western end of the original Kent Building. The expansion utilized 150,000 square feet (14,000 m) of unused air rights above the buildings in the middle of the block. The Kent Building was not a New York City designated landmark, unlike the Chrysler Building, so its renovation did not require the LPC's approval. After the addition, the total area of the Kent building was 770,000 square feet (72,000 m).",
"title": "Chrysler Center"
},
{
"paragraph_id": 66,
"text": "A new building, also designed by Philip Johnson, was built between the original skyscraper and the annex. This became the Chrysler Trylons, a commercial pavilion three stories high with a retail area of 22,000 square feet (2,000 m). Its design consists of three triangular glass \"trylons\" measuring 57 ft (17 m), 68 ft (21 m), and 73 ft (22 m) tall; each is slanted in a different direction. The trylons are supported by vertical steel mullions measuring 10 in (250 mm) wide; between the mullions are 535 panes of reflective gray glass. The retail structures themselves are placed on either side of the trylons. Due to the complexity of the structural work, structural engineer Severud Associates built a replica at Rimouski, Quebec. Johnson designed the Chrysler Trylons as \"a monument for 42nd Street [...] to give you the top of the Chrysler Building at street level.\"",
"title": "Chrysler Center"
},
{
"paragraph_id": 67,
"text": "After these modifications, the total leasable area of the complex was 2,062,772 square feet (191,637.8 m). The total cost of this project was about one hundred million dollars. This renovation has won several awards and commendations, including an Energy Star rating from the Environmental Protection Agency; a LEED Gold designation; and the Skyscraper Museum Outstanding Renovation Award of 2001.",
"title": "Chrysler Center"
},
{
"paragraph_id": 68,
"text": "In January 1930, the Chrysler Corporation opened satellite offices in the Chrysler Building during Automobile Show Week. In addition to the Chrysler Salon product showroom on the first and second floors, the building had a lounge and a theater for showing films of Chrysler products. Other original large tenants included Time, Inc. and Texaco oil. Needing more office space, Time moved to Rockefeller Center in 1937. Texaco relocated to a more suburban workplace in Purchase, New York, in 1977. In addition, the offices of Shaw Walker and J. S. Bache & Company were immediately atop the Chrysler Salon, while A. B. Dick, Pan American World Airways, Adams Hats, Schrafft's, and Florsheim Shoes also had offices in the building.",
"title": "Tenants"
},
{
"paragraph_id": 69,
"text": "Notable modern tenants include:",
"title": "Tenants"
},
{
"paragraph_id": 70,
"text": "The completed Chrysler Building garnered mixed reviews in the press. Van Alen was hailed as the \"Doctor of Altitude\" by Architect magazine, while architect Kenneth Murchison called Van Alen the \"Ziegfeld of his profession\", comparing him to popular Broadway producer Florenz Ziegfeld Jr. The building was praised for being \"an expression of the intense activity and vibrant life of our day\", and for \"teem[ing] with the spirit of modernism, ... the epitome of modern business life, stand[ing] for progress in architecture and in modern building methods.\" An anonymous critic wrote in Architectural Forum's October 1930 issue: \"The Chrysler...stands by itself, something apart and alone. It is simply the realization, the fulfillment in metal and masonry, of a one-man dream, a dream of such ambitions and such magnitude as to defy the comprehension and the criticism of ordinary men or by ordinary standards.\"",
"title": "Impact "
},
{
"paragraph_id": 71,
"text": "The journalist George S. Chappell called the Chrysler's design \"distinctly a stunt design, evolved to make the man in the street look up\". Douglas Haskell stated that the building \"embodies no compelling, organic idea\", and alleged that Van Alen had abandoned \"some of his best innovations in behalf of stunts and new 'effects'\". Others compared the Chrysler Building to \"an upended swordfish\", or claimed it had a \"Little Nemo\"-like design. Lewis Mumford, a supporter of the International Style and one of the foremost architectural critics of the United States at the time, despised the building for its \"inane romanticism, meaningless voluptuousness, [and] void symbolism\". The public also had mixed reviews of the Chrysler Building, as Murchison wrote: \"Some think it's a freak; some think it's a stunt.\"",
"title": "Impact "
},
{
"paragraph_id": 72,
"text": "Later reviews were more positive. Architect Robert A. M. Stern wrote that the Chrysler Building was \"the most extreme example of the [1920s and 1930s] period's stylistic experimentation\", as contrasted with 40 Wall Street and its \"thin\" detailing. George H. Douglas wrote in 2004 that the Chrysler Building \"remains one of the most appealing and awe-inspiring of skyscrapers\". Architect Le Corbusier called the building \"hot jazz in stone and steel\". Architectural critic Ada Louise Huxtable stated that the building had \"a wonderful, decorative, evocative aesthetic\", while Paul Goldberger noted the \"compressed, intense energy\" of the lobby, the \"magnificent\" elevators, and the \"magical\" view from the crown. Anthony W. Robins said the Chrysler Building was \"one-of-a-kind, staggering, romantic, soaring, the embodiment of 1920s skyscraper pizzazz, the great symbol of Art Deco New York\".",
"title": "Impact "
},
{
"paragraph_id": 73,
"text": "The LPC said that the tower \"embodies the romantic essence of the New York City skyscraper\". The travel guide Frommer's gave the building an \"exceptional\" recommendation, with author Pauline Frommer writing, \"In the Chrysler Building we see the roaring-twenties version of what Alan Greenspan called 'irrational exuberance'—a last burst of corporate headquarter building before stocks succumbed to the thudding crash of 1929.\"",
"title": "Impact "
},
{
"paragraph_id": 74,
"text": "The Chrysler Building appears in several films set in New York and is widely considered one of the most positively acclaimed buildings in the city. A 1996 survey of New York architects revealed it as their favorite, and The New York Times described it in 2005 as \"the single most important emblem of architectural imagery on the New York skyline\". In mid-2005, the Skyscraper Museum in Lower Manhattan asked 100 architects, builders, critics, engineers, historians, and scholars, among others, to choose their 10 favorites among 25 of the city's towers. The Chrysler Building came in first place, with 90 respondents placing it on their ballots. In 2007, the building ranked ninth among 150 buildings in the AIA's List of America's Favorite Architecture.",
"title": "Impact "
},
{
"paragraph_id": 75,
"text": "The Chrysler Building is widely heralded as an Art Deco icon. Fodor's New York City 2010 described the building as being \"one of the great art deco masterpieces\" which \"wins many a New Yorker's vote for the city's most iconic and beloved skyscraper\". Frommer's states that the Chrysler was \"one of the most impressive Art Deco buildings ever constructed\". Insight Guides' 2016 edition maintains that the Chrysler Building is considered among the city's \"most beautiful\" buildings. Its distinctive profile has inspired similar skyscrapers worldwide, including One Liberty Place in Philadelphia, Two Prudential Plaza in Chicago, and the Al Kazim Towers in Dubai. In addition, the New York-New York Hotel and Casino in Paradise, Nevada, contains the \"Chrysler Tower\", a replica of the Chrysler Building measuring 35 or 40 stories tall. A portion of the hotel's interior was also designed to resemble the Chrysler Building's interior.",
"title": "Impact "
},
{
"paragraph_id": 76,
"text": "While seen in many films, the Chrysler Building almost never appears as a main setting in them, prompting architect and author James Sanders to quip it should win \"the Award for Best Supporting Skyscraper\". The building was supposed to be featured in the 1933 film King Kong, but only makes a cameo at the end thanks to its producers opting for the Empire State Building in a central role. The Chrysler Building notably appears in the background of The Wiz (1978); as the setting of much of Q - The Winged Serpent (1982); in the initial credits of The Shadow of the Witness (1987); and during or after apocalyptic events in Independence Day (1996), Armageddon (1998), Deep Impact (1998), Godzilla (1998), and A.I. Artificial Intelligence (2001). The building also appears in other films, such as Spider-Man (2002), Fantastic Four: Rise of the Silver Surfer (2007), Two Weeks Notice (2002), The Sorcerer's Apprentice (2010), The Avengers (2012) and Men in Black 3 (2012).",
"title": "Impact "
},
{
"paragraph_id": 77,
"text": "In addition to films, the building is mentioned in the number \"It's the Hard Knock Life\" for the musical Annie. In the Squaresoft video game Parasite Eve, the building is the setting for the post-game content.",
"title": "Impact "
},
{
"paragraph_id": 78,
"text": "The Chrysler Building is frequently a subject of photographs. In December 1929, Walter Chrysler hired Margaret Bourke-White to take publicity images from a scaffold 400 feet (120 m) high. She was deeply inspired by the new structure and especially smitten by the massive eagle's-head figures projecting off the building. In her autobiography, Portrait of Myself, Bourke-White wrote, \"On the sixty-first floor, the workmen started building some curious structures which overhung 42nd Street and Lexington Avenue below. When I learned these were to be gargoyles à la Notre Dame, but made of stainless steel as more suitable for the twentieth century, I decided that here would be my new studio. There was no place in the world that I would accept as a substitute.\"",
"title": "Impact "
},
{
"paragraph_id": 79,
"text": "According to one account, Bourke-White wanted to live in the building for the duration of the photo shoot, but the only person able to do so was the janitor, so she was instead relegated to co-leasing a studio with Time Inc. In 1930, several of her photographs were used in a special report on skyscrapers in the then-new Fortune magazine. Bourke-White worked in a 61st-floor studio designed by John Vassos until she was evicted in 1934. In 1934, Bourke-White's partner Oscar Graubner took a famous photo called \"Margaret Bourke-White atop the Chrysler Building\", which depicts her taking a photo of the city's skyline while sitting on one of the 61st-floor eagle ornaments. On October 5, 1998, Christie's auctioned the photograph for $96,000. In addition, during a January 1931 dance organized by the Society of Beaux-Arts, six architects, including Van Alen, were photographed while wearing costumes resembling the buildings that each architect designed.",
"title": "Impact "
}
] | The Chrysler Building is an Art Deco skyscraper on the East Side of Manhattan in New York City, at the intersection of 42nd Street and Lexington Avenue in Midtown Manhattan. At 1,046 ft (319 m), it is the tallest brick building in the world with a steel framework, and it was the world's tallest building for 11 months after its completion in 1930. As of 2019, the Chrysler is the 12th-tallest building in the city, tied with The New York Times Building. Originally a project of real estate developer and former New York State Senator William H. Reynolds, the building was constructed by Walter Chrysler, the head of the Chrysler Corporation. The construction of the Chrysler Building, an early skyscraper, was characterized by a competition with 40 Wall Street and the Empire State Building to become the world's tallest building. The Chrysler Building was designed and funded by Walter Chrysler personally as a real estate investment for his children, but it was not intended as the Chrysler Corporation's headquarters. An annex was completed in 1952, and the building was sold by the Chrysler family the next year, with numerous subsequent owners. When the Chrysler Building opened, there were mixed reviews of the building's design, some calling it inane and unoriginal, others hailing it as modernist and iconic. Today the building is seen as a paragon of the Art Deco architectural style. In 2007, it was ranked ninth on the American Institute of Architects' List of America's Favorite Architecture. The facade and interior became New York City designated landmarks in 1978, and the structure was added to the National Register of Historic Places as a National Historic Landmark in 1976. | 2001-10-13T23:15:25Z | 2023-12-20T17:12:05Z | [
"Template:Infobox building",
"Template:Authority control",
"Template:Use American English",
"Template:Cite book",
"Template:Cite magazine",
"Template:Commons category",
"Template:Ctbuh",
"Template:Refbegin",
"Template:Portal",
"Template:Cite news",
"Template:Cite aia5",
"Template:Short description",
"Template:Cbignore",
"Template:Section link",
"Template:Multiple image",
"Template:'s",
"Template:NHLS url",
"Template:Refend",
"Template:Blockquote",
"Template:Inflation",
"Template:S-end",
"Template:Inflation-year",
"Template:Reflist",
"Template:Wikiquote",
"Template:Cite encyclopedia",
"Template:Cite New York 2000",
"Template:Official website",
"Template:S-ach",
"Template:Cite enc-nyc",
"Template:Webarchive",
"Template:Navboxes",
"Template:Cite New York 1930",
"Template:Use mdy dates",
"Template:Convert",
"Template:As of",
"Template:About",
"Template:Cite web",
"Template:S-ttl",
"Template:Good article",
"Template:Main",
"Template:Clear",
"Template:'",
"Template:Notelist",
"Template:Small",
"Template:Efn",
"Template:Cvt",
"Template:Cite report",
"Template:S-start",
"Template:Cite NYCS map",
"Template:S-bef",
"Template:S-aft",
"Template:Sfn",
"Template:NYCS trains"
] | https://en.wikipedia.org/wiki/Chrysler_Building |
6,790 | Cape Breton (disambiguation) | Cape Breton Island is an island in the Canadian province of Nova Scotia.
Cape Breton may also refer to: | [
{
"paragraph_id": 0,
"text": "Cape Breton Island is an island in the Canadian province of Nova Scotia, in Canada.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Cape Breton may also refer to:",
"title": ""
}
] | Cape Breton Island is an island in the Canadian province of Nova Scotia. Cape Breton may also refer to: | 2023-07-25T03:55:48Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Canned search",
"Template:Srt",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/Cape_Breton_(disambiguation) |
6,794 | Comet Shoemaker–Levy 9 | Comet Shoemaker–Levy 9 (formally designated D/1993 F2) broke apart in July 1992 and collided with Jupiter in July 1994, providing the first direct observation of an extraterrestrial collision of Solar System objects. This generated a large amount of coverage in the popular media, and the comet was closely observed by astronomers worldwide. The collision provided new information about Jupiter and highlighted its possible role in reducing space debris in the inner Solar System.
The comet was discovered by astronomers Carolyn and Eugene M. Shoemaker, and David Levy in 1993. Shoemaker–Levy 9 (SL9) had been captured by Jupiter and was orbiting the planet at the time. It was located on the night of March 24 in a photograph taken with the 46 cm (18 in) Schmidt telescope at the Palomar Observatory in California. It was the first active comet observed to be orbiting a planet, and had probably been captured by Jupiter around 20 to 30 years earlier.
Calculations showed that its unusual fragmented form was due to a previous close approach to Jupiter in July 1992. At that time, the orbit of Shoemaker–Levy 9 passed within Jupiter's Roche limit, and Jupiter's tidal forces had acted to pull apart the comet. The comet was later observed as a series of fragments ranging up to 2 km (1.2 mi) in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994, at a speed of approximately 60 km/s (37 mi/s) (Jupiter's escape velocity) or 216,000 km/h (134,000 mph). The prominent scars from the impacts were more easily visible than the Great Red Spot and persisted for many months.
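The quoted impact speed is essentially Jupiter's escape velocity, v_esc = sqrt(2GM/R), and can be checked from first principles. A minimal sketch in Python, using standard reference values for Jupiter's gravitational parameter and radius (assumed here; the text gives only the ~70,000 km radius):

```python
import math

# Escape velocity at Jupiter's cloud tops: v_esc = sqrt(2*G*M/R).
# GM and R are standard reference values, not figures from the article.
GM_JUPITER = 1.26687e17  # gravitational parameter, m^3/s^2
R_JUPITER = 7.1492e7     # equatorial radius, m (~71,492 km)

v_esc = math.sqrt(2 * GM_JUPITER / R_JUPITER)  # m/s
print(f"escape velocity ~ {v_esc / 1000:.1f} km/s")  # ~59.5 km/s
print(f"                ~ {v_esc * 3.6:,.0f} km/h")  # ~214,000 km/h
```

The small difference from the quoted 216,000 km/h comes from rounding the speed up to 60 km/s.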
While conducting a program of observations designed to uncover near-Earth objects, the Shoemakers and Levy discovered Comet Shoemaker–Levy 9 on the night of March 24, 1993, in a photograph taken with the 0.46 m (1.5 ft) Schmidt telescope at the Palomar Observatory in California. The comet was thus a serendipitous discovery, but one that quickly overshadowed the results from their main observing program.
Comet Shoemaker–Levy 9 was the ninth periodic comet (a comet whose orbital period is 200 years or less) discovered by the Shoemakers and Levy, hence its name. It was their eleventh comet discovery overall, including two non-periodic comets, which use a different nomenclature. The discovery was announced in IAU Circular 5725 on March 26, 1993.
The discovery image gave the first hint that comet Shoemaker–Levy 9 was an unusual comet, as it appeared to show multiple nuclei in an elongated region about 50 arcseconds long and 10 arcseconds wide. Brian G. Marsden of the Central Bureau for Astronomical Telegrams noted that the comet lay only about 4 degrees from Jupiter as seen from Earth, and that although this could be a line-of-sight effect, its apparent motion in the sky suggested that the comet was physically close to the planet.
Orbital studies of the new comet soon revealed that it was orbiting Jupiter rather than the Sun, unlike all other comets known at the time. Its orbit around Jupiter was very loosely bound, with a period of about 2 years, an apoapsis (the point in the orbit farthest from the planet) of 0.33 astronomical units (49 million kilometres; 31 million miles), and a very high eccentricity (e = 0.9986).
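For an ellipse, apoapsis and periapsis are r_apo = a(1 + e) and r_peri = a(1 - e), so the quoted elements imply r_peri = r_apo (1 - e)/(1 + e). A rough check in Python, treating the orbit as a fixed two-body ellipse and ignoring the solar perturbations that actually reshaped it from one revolution to the next:

```python
# Periapsis implied by the quoted apoapsis and eccentricity:
# r_peri = r_apo * (1 - e) / (1 + e).
r_apo_km = 49e6  # 0.33 AU, from the text
e = 0.9986       # eccentricity, from the text

r_peri_km = r_apo_km * (1 - e) / (1 + e)
print(f"periapsis ~ {r_peri_km:,.0f} km from Jupiter's center")  # ~34,000 km
# Jupiter's radius is ~70,000 km, so a periapsis this low lies beneath the
# cloud tops -- consistent with the collision predicted for July 1994.
```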
Tracing back the comet's orbital motion revealed that it had been orbiting Jupiter for some time. It is likely that it was captured from a solar orbit in the early 1970s, although the capture may have occurred as early as the mid-1960s. Several other observers found images of the comet in precovery images obtained before March 24, including Kin Endate from a photograph exposed on March 15, S. Otomo on March 17, and a team led by Eleanor Helin from images on March 19. An image of the comet on a Schmidt photographic plate taken on March 19 was identified on March 21 by M. Lindgren, in a project searching for comets near Jupiter. However, as his team were expecting comets to be inactive or at best exhibit a weak dust coma, and SL9 had a peculiar morphology, its true nature was not recognised until the official announcement 5 days later. No precovery images dating back to earlier than March 1993 have been found. Before the comet was captured by Jupiter, it was probably a short-period comet with an aphelion just inside Jupiter's orbit, and a perihelion interior to the asteroid belt.
The volume of space within which an object can be said to orbit Jupiter is defined by Jupiter's Hill sphere. When the comet passed Jupiter in the late 1960s or early 1970s, it happened to be near its aphelion, and found itself slightly within Jupiter's Hill sphere. Jupiter's gravity nudged the comet towards it. Because the comet's motion with respect to Jupiter was very small, it fell almost straight toward Jupiter, which is why it ended up on a Jove-centric orbit of very high eccentricity—that is to say, the ellipse was nearly flattened out.
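The size of that sphere is easy to estimate (a rough figure; Jupiter's semi-major axis of about 5.2 AU and the Jupiter-to-Sun mass ratio are assumed constants, not given in the source):

    \[
    r_H \approx a \left(\frac{m_J}{3\,M_\odot}\right)^{1/3}
        \approx 5.2\ \mathrm{AU} \times \left(\frac{9.5\times10^{-4}}{3}\right)^{1/3}
        \approx 0.36\ \mathrm{AU} \approx 5.3\times10^{7}\ \mathrm{km},
    \]

so the comet's 0.33 AU apoapsis placed it just inside this boundary, in the weakly bound region where solar perturbations are strongest.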
The comet had apparently passed extremely close to Jupiter on July 7, 1992, just over 40,000 km (25,000 mi) above its cloud tops—a smaller distance than Jupiter's radius of 70,000 km (43,000 mi), and well within the orbit of Jupiter's innermost moon Metis and the planet's Roche limit, inside which tidal forces are strong enough to disrupt a body held together only by gravity. Although the comet had approached Jupiter closely before, the July 7 encounter seemed to be by far the closest, and the fragmentation of the comet is thought to have occurred at this time. Each fragment of the comet was denoted by a letter of the alphabet, from "fragment A" through to "fragment W", a practice already established from previously observed fragmented comets.
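The fragmentation distance is consistent with a textbook Roche estimate (an order-of-magnitude sketch; the comet density of about 0.5 g/cm³ is the figure derived later in this article, Jupiter's mean density of about 1.33 g/cm³ is an assumed constant, and the coefficient 2.44 applies to a fluid, strengthless body):

    \[
    d \approx 2.44\, R_J \left(\frac{\rho_J}{\rho_c}\right)^{1/3}
      \approx 2.44 \times 7.0\times10^{4}\ \mathrm{km} \times (2.66)^{1/3}
      \approx 2.4\times10^{5}\ \mathrm{km}.
    \]

The July 1992 perijove of roughly 110,000 km from Jupiter's center (40,000 km above the cloud tops plus the 70,000 km radius) therefore lay deep inside this limit, which is why tidal stresses could pull apart a weakly bound nucleus.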
More exciting for planetary astronomers was that the best orbital calculations suggested that the comet would pass within 45,000 km (28,000 mi) of the center of Jupiter, a distance smaller than the planet's radius, meaning that there was an extremely high probability that SL9 would collide with Jupiter in July 1994. Studies suggested that the train of nuclei would plow into Jupiter's atmosphere over a period of about five days.
The discovery that the comet was likely to collide with Jupiter caused great excitement within the astronomical community and beyond, as astronomers had never before seen two significant Solar System bodies collide. Intense studies of the comet were undertaken, and as its orbit became more accurately established, the possibility of a collision became a certainty. The collision would provide a unique opportunity for scientists to look inside Jupiter's atmosphere, as the collisions were expected to cause eruptions of material from the layers normally hidden beneath the clouds.
Astronomers estimated that the visible fragments of SL9 ranged in size from a few hundred metres (around 1,000 ft) to two kilometres (1.2 mi) across, suggesting that the original comet may have had a nucleus up to 5 km (3.1 mi) across—somewhat larger than Comet Hyakutake, which became very bright when it passed close to the Earth in 1996. One of the great debates in advance of the impact was whether the effects of the impact of such small bodies would be noticeable from Earth, apart from a flash as they disintegrated like giant meteors. The most optimistic prediction was that large, asymmetric ballistic fireballs would rise above the limb of Jupiter and into sunlight to be visible from Earth. Other suggested effects of the impacts were seismic waves travelling across the planet, an increase in stratospheric haze on the planet due to dust from the impacts, and an increase in the mass of the Jovian ring system. However, given that observing such a collision was completely unprecedented, astronomers were cautious with their predictions of what the event might reveal.
Anticipation grew as the predicted date for the collisions approached, and astronomers trained terrestrial telescopes on Jupiter. Several space observatories did the same, including the Hubble Space Telescope, the ROSAT X-ray-observing satellite, the W. M. Keck Observatory, and the Galileo spacecraft, then on its way to a rendezvous with Jupiter scheduled for 1995. Although the impacts took place on the side of Jupiter hidden from Earth, Galileo, then at a distance of 1.6 AU (240 million km; 150 million mi) from the planet, was able to see the impacts as they occurred. Jupiter's rapid rotation brought the impact sites into view for terrestrial observers a few minutes after the collisions.
Two other space probes made observations at the time of the impact: the Ulysses spacecraft, primarily designed for solar observations, was pointed towards Jupiter from its location 2.6 AU (390 million km; 240 million mi) away, and the distant Voyager 2 probe, some 44 AU (6.6 billion km; 4.1 billion mi) from Jupiter and on its way out of the Solar System following its encounter with Neptune in 1989, was programmed to look for radio emission in the 1–390 kHz range and make observations with its ultraviolet spectrometer.
Astronomer Ian Morison described the impacts as follows:
The first impact occurred at 20:13 UTC on July 16, 1994, when fragment A of the [comet's] nucleus slammed into Jupiter's southern hemisphere at about 60 km/s (37 mi/s). Instruments on Galileo detected a fireball that reached a peak temperature of about 24,000 K (23,700 °C; 42,700 °F), compared to the typical Jovian cloud-top temperature of about 130 K (−143 °C; −226 °F). It then expanded and cooled rapidly to about 1,500 K (1,230 °C; 2,240 °F). The plume from the fireball quickly reached a height of over 3,000 km (1,900 mi) and was observed by the HST.
A few minutes after the impact fireball was detected, Galileo measured renewed heating, probably due to ejected material falling back onto the planet. Earth-based observers detected the fireball rising over the limb of the planet shortly after the initial impact.
Despite published predictions, astronomers had not expected to see the fireballs from the impacts and did not have any idea how visible the other atmospheric effects of the impacts would be from Earth. Observers soon saw a huge dark spot after the first impact; the spot was visible from Earth. This and subsequent dark spots were thought to have been caused by debris from the impacts, and were markedly asymmetric, forming crescent shapes in front of the direction of impact.
Over the next six days, 21 distinct impacts were observed, with the largest coming on July 18 at 07:33 UTC when fragment G struck Jupiter. This impact created a giant dark spot over 12,000 km or 7,500 mi (almost one Earth diameter) across, and was estimated to have released an energy equivalent to 6,000,000 megatons of TNT (600 times the world's nuclear arsenal). Two impacts 12 hours apart on July 19 created impact marks of similar size to that caused by fragment G, and impacts continued until July 22, when fragment W struck the planet.
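The quoted energy is broadly consistent with a simple kinetic-energy estimate (an order-of-magnitude check assuming a spherical fragment of diameter 2 km, the 0.5 g/cm³ density derived in the post-impact analysis below, the 60 km/s impact speed, and 1 megaton of TNT ≈ 4.184×10¹⁵ J):

    \[
    m = \tfrac{4}{3}\pi r^{3}\rho \approx \tfrac{4}{3}\pi\,(10^{3}\ \mathrm{m})^{3} \times 500\ \mathrm{kg/m^{3}} \approx 2.1\times10^{12}\ \mathrm{kg},
    \]
    \[
    E = \tfrac{1}{2} m v^{2} \approx \tfrac{1}{2} \times 2.1\times10^{12} \times (6\times10^{4})^{2}\ \mathrm{J} \approx 3.8\times10^{21}\ \mathrm{J} \approx 9\times10^{5}\ \mathrm{Mt}.
    \]

This is within an order of magnitude of the 6,000,000 Mt quoted for fragment G; since the energy scales with the cube of the diameter, the gap is plausibly covered by uncertainty in that fragment's true size.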
Observers hoped that the impacts would give them a first glimpse of Jupiter beneath the cloud tops, as lower material was exposed by the comet fragments punching through the upper atmosphere. Spectroscopic studies revealed absorption lines in the Jovian spectrum due to diatomic sulfur (S₂) and carbon disulfide (CS₂), the first detection of either in Jupiter, and only the second detection of S₂ in any astronomical object. Other molecules detected included ammonia (NH₃) and hydrogen sulfide (H₂S). The amount of sulfur implied by the quantities of these compounds was much greater than the amount that would be expected in a small cometary nucleus, showing that material from within Jupiter was being revealed. Oxygen-bearing molecules such as sulfur dioxide were not detected, to the surprise of astronomers.
As well as these molecules, emission from heavy atoms such as iron, magnesium and silicon was detected, with abundances consistent with what would be found in a cometary nucleus. Although a substantial amount of water was detected spectroscopically, it was not as much as predicted, meaning that either the water layer thought to exist below the clouds was thinner than predicted, or that the cometary fragments did not penetrate deeply enough.
As predicted, the collisions generated enormous waves that swept across Jupiter at speeds of 450 m/s (1,500 ft/s) and were observed for over two hours after the largest impacts. The waves were thought to be travelling within a stable layer acting as a waveguide, and some scientists thought the stable layer must lie within the hypothesised tropospheric water cloud. However, other evidence seemed to indicate that the cometary fragments had not reached the water layer, and the waves were instead propagating within the stratosphere.
Radio observations revealed a sharp increase in continuum emission at a wavelength of 21 cm (8.3 in) after the largest impacts, which peaked at 120% of the normal emission from the planet. This was thought to be due to synchrotron radiation, caused by the injection of relativistic electrons—electrons with velocities near the speed of light—into the Jovian magnetosphere by the impacts.
About an hour after fragment K entered Jupiter, observers recorded auroral emission near the impact region, as well as at the antipode of the impact site with respect to Jupiter's strong magnetic field. The cause of these emissions was difficult to establish due to a lack of knowledge of Jupiter's internal magnetic field and of the geometry of the impact sites. One possible explanation was that upwardly accelerating shock waves from the impact accelerated charged particles enough to cause auroral emission, a phenomenon more typically associated with fast-moving solar wind particles striking a planetary atmosphere near a magnetic pole.
Some astronomers had suggested that the impacts might have a noticeable effect on the Io torus, a torus of high-energy particles connecting Jupiter with the highly volcanic moon Io. High-resolution spectroscopic studies found that variations in the ion density, rotational velocity, and temperatures at the time of impact and afterwards were within the normal limits.
Voyager 2 failed to detect anything, with calculations showing that the fireballs were just below the craft's limit of detection; no abnormal levels of UV radiation or radio signals were registered after the impacts. Ulysses also failed to detect any abnormal radio frequencies.
Several models were devised to compute the density and size of Shoemaker–Levy 9. Its average density was calculated to be about 0.5 g/cm³ (0.018 lb/cu in); the breakup of a much less dense comet would not have resembled the observed string of objects. The size of the parent comet was calculated to be about 1.8 km (1.1 mi) in diameter. These predictions were among the few that were actually confirmed by subsequent observation.
One of the surprises of the impacts was the small amount of water revealed compared to prior predictions. Before the impact, models of Jupiter's atmosphere had indicated that the break-up of the largest fragments would occur at atmospheric pressures of anywhere from 30 kilopascals to a few tens of megapascals (from 0.3 to a few hundred bar), with some predictions that the comet would penetrate a layer of water and create a bluish shroud over that region of Jupiter.
Astronomers did not observe large amounts of water following the collisions, and later impact studies found that fragmentation and destruction of the cometary fragments in a meteor air burst probably occurred at much higher altitudes than previously expected, with even the largest fragments being destroyed when the pressure reached 250 kPa (36 psi), well above the expected depth of the water layer. The smaller fragments were probably destroyed before they even reached the cloud layer.
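High-altitude destruction is what a simple ram-pressure criterion suggests (a hedged sketch: a fragment starts to disrupt roughly when the aerodynamic ram pressure exceeds its material strength; the strength value below is an assumed figure for icy material, not from the source):

    \[
    \rho_{\mathrm{atm}}\,v^{2} \gtrsim S
    \quad\Longrightarrow\quad
    \rho_{\mathrm{atm}} \gtrsim \frac{S}{v^{2}} \approx \frac{10^{6}\ \mathrm{Pa}}{(6\times10^{4}\ \mathrm{m/s})^{2}} \approx 3\times10^{-4}\ \mathrm{kg/m^{3}},
    \]

a density reached at ambient pressures of order a millibar, far above the water clouds. Once breakup begins the fragment flattens and decelerates within a few scale heights, so such models deposit most of the energy well above the water layer, in line with the 250 kPa figure above.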
The visible scars from the impacts could be seen on Jupiter for many months. They were extremely prominent, and observers described them as more easily visible than the Great Red Spot. A search of historical observations revealed that the spots were probably the most prominent transient features ever seen on the planet, and that although the Great Red Spot is notable for its striking color, no spots of the size and darkness of those caused by the SL9 impacts had ever been recorded before, or since.
Spectroscopic observers found that ammonia and carbon disulfide persisted in the atmosphere for at least fourteen months after the collisions, with a considerable amount of ammonia being present in the stratosphere as opposed to its normal location in the troposphere.
Counterintuitively, the atmospheric temperature dropped to normal levels much more quickly at the larger impact sites than at the smaller sites: at the larger impact sites, temperatures were elevated over a region 15,000 to 20,000 km (9,300 to 12,400 mi) wide, but dropped back to normal levels within a week of the impact. At smaller sites, temperatures 10 K (10 °C; 18 °F) higher than the surroundings persisted for almost two weeks. Global stratospheric temperatures rose immediately after the impacts, then fell to below pre-impact temperatures 2–3 weeks afterwards, before rising slowly to normal temperatures.
SL9 is not unique in having orbited Jupiter for a time; five comets (including 82P/Gehrels, 147P/Kushida–Muramatsu, and 111P/Helin–Roman–Crockett) are known to have been temporarily captured by the planet. Cometary orbits around Jupiter are unstable, as they will be highly elliptical and likely to be strongly perturbed by the Sun's gravity at apojove (the farthest point on the orbit from the planet).
By far the most massive planet in the Solar System, Jupiter can capture objects relatively frequently, but the size of SL9 makes it a rarity: one post-impact study estimated that comets 0.3 km (0.19 mi) in diameter impact the planet once in approximately 500 years and those 1.6 km (1 mi) in diameter do so just once in every 6,000 years.
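Rates like these translate directly into probabilities (an illustrative calculation assuming impacts follow a Poisson process at the quoted mean intervals τ):

    \[
    P(\geq 1 \text{ impact in } t) = 1 - e^{-t/\tau},
    \qquad
    1 - e^{-100/6000} \approx 1.7\%,
    \]

the chance of at least one SL9-class (roughly 1.6 km) impact in any given century, versus about 18% for a 0.3 km object with τ = 500 years.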
There is very strong evidence that comets have previously been fragmented and collided with Jupiter and its satellites. During the Voyager missions to the planet, planetary scientists identified 13 crater chains on Callisto and three on Ganymede, the origin of which was initially a mystery. Crater chains seen on the Moon often radiate from large craters, and are thought to be caused by secondary impacts of the original ejecta, but the chains on the Jovian moons did not lead back to a larger crater. The impact of SL9 strongly implied that the chains were due to trains of disrupted cometary fragments crashing into the satellites.
On July 19, 2009, exactly 15 years after the SL9 impacts, a new black spot about the size of the Pacific Ocean appeared in Jupiter's southern hemisphere. Thermal infrared measurements showed the impact site was warm, and spectroscopic analysis detected the production of excess hot ammonia and silica-rich dust in the upper regions of Jupiter's atmosphere. Scientists concluded that another impact event had occurred, this time caused by a more compact and stronger object, probably a small undiscovered asteroid.
The events of SL9's interaction with Jupiter greatly highlighted Jupiter's role in protecting the inner planets from both interstellar and in-system debris by acting as a "cosmic vacuum cleaner" for the Solar System (Jupiter barrier). The planet's strong gravitational influence attracts many small comets and asteroids and the rate of cometary impacts on Jupiter is thought to be between 2,000 and 8,000 times higher than the rate on Earth.
The extinction of the non-avian dinosaurs at the end of the Cretaceous period is generally thought to have been caused by the Cretaceous–Paleogene impact event, which created the Chicxulub crater, demonstrating that cometary impacts are indeed a serious threat to life on Earth. Astronomers have speculated that without Jupiter's immense gravity, extinction events might have been more frequent on Earth and complex life might not have been able to develop. This is part of the argument used in the Rare Earth hypothesis.
In 2009, it was shown that the presence of a smaller planet at Jupiter's position in the Solar System might increase the impact rate of comets on the Earth significantly. A planet of Jupiter's mass still seems to provide increased protection against asteroids, but the total effect on all orbital bodies within the Solar System is unclear. This and other recent models call into question the nature of Jupiter's influence on Earth impacts. | [
] | Comet Shoemaker–Levy 9 broke apart in July 1992 and collided with Jupiter in July 1994, providing the first direct observation of an extraterrestrial collision of Solar System objects. This generated a large amount of coverage in the popular media, and the comet was closely observed by astronomers worldwide. The collision provided new information about Jupiter and highlighted its possible role in reducing space debris in the inner Solar System. The comet was discovered by astronomers Carolyn and Eugene M. Shoemaker, and David Levy in 1993. Shoemaker–Levy 9 (SL9) had been captured by Jupiter and was orbiting the planet at the time. It was located on the night of March 24 in a photograph taken with the 46 cm (18 in) Schmidt telescope at the Palomar Observatory in California. It was the first active comet observed to be orbiting a planet, and had probably been captured by Jupiter around 20 to 30 years earlier. Calculations showed that its unusual fragmented form was due to a previous closer approach to Jupiter in July 1992. At that time, the orbit of Shoemaker–Levy 9 passed within Jupiter's Roche limit, and Jupiter's tidal forces had acted to pull apart the comet. The comet was later observed as a series of fragments ranging up to 2 km (1.2 mi) in diameter. These fragments collided with Jupiter's southern hemisphere between July 16 and 22, 1994 at a speed of approximately 60 km/s (37 mi/s) or 216,000 km/h (134,000 mph). The prominent scars from the impacts were more easily visible than the Great Red Spot and persisted for many months. | 2001-10-15T09:58:19Z | 2023-11-28T16:33:59Z | [
"Template:·",
"Template:Cite book",
"Template:Modern impact events",
"Template:Portal bar",
"Template:Use mdy dates",
"Template:Reflist",
"Template:Comets",
"Template:Jupiter",
"Template:Featured article",
"Template:Redirect",
"Template:Convert",
"Template:Cvt",
"Template:Cite journal",
"Template:Authority control",
"Template:Cite conference",
"Template:Short description",
"Template:Infobox comet",
"Template:Legend2",
"Template:Anchor",
"Template:Main",
"Template:See also",
"Template:Cite web",
"Template:Commons category",
"Template:Spoken Wikipedia"
] | https://en.wikipedia.org/wiki/Comet_Shoemaker%E2%80%93Levy_9 |
6,796 | Ceres Brewery | The Ceres Brewery was a beer and soft drink producing facility in Århus, Denmark, that operated from 1856 until 2008. Although the brewery was closed by its owner Royal Unibrew the Ceres brand continues, with the product brewed at other facilities. The area where the brewery stood is being redeveloped for residential and commercial use and has been named CeresByen (Ceres City).
"Ceres Brewery" was founded in 1856 by Malthe Conrad Lottrup, a grocer, with chemists "A. S. Aagard" and "Knud Redelien", as the city's seventh brewery. It was named after the Roman goddess Ceres, and its opening was announced in the local newspaper, Århus Stiftstidende.
Lottrup expanded the brewery after ten years, adding a grand new building as his private residence.
He was succeeded by his son-in-law, Laurits Christian Meulengracht, who ran the brewery for almost thirty years, expanding it further before selling it to "Østjyske Bryggerier", another brewing firm.
The Ceres brewery was named an official purveyor to the Royal Danish Court in 1914.
] | The Ceres Brewery was a beer and soft drink producing facility in Århus, Denmark, that operated from 1856 until 2008. Although the brewery was closed by its owner Royal Unibrew the Ceres brand continues, with the product brewed at other facilities. The area where the brewery stood is being redeveloped for residential and commercial use and has been named CeresByen. | 2001-10-14T23:23:29Z | 2023-12-27T00:12:55Z | [
"Template:Short description",
"Template:Infobox company",
"Template:Reflist",
"Template:Cite web",
"Template:Webarchive",
"Template:Commons category",
"Template:Danish beer"
] | https://en.wikipedia.org/wiki/Ceres_Brewery |
6,799 | COBOL | COBOL (/ˈkoʊbɒl, -bɔːl/; an acronym for "common business-oriented language") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages or replaced with other software.
COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly forced computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023.
COBOL statements have prose syntax such as MOVE x TO y, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. This contrasts with the succinct and mathematically-inspired syntax of other languages (in this case, y = x;).
COBOL code is split into four divisions (identification, environment, data, and procedure) containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions and just one class.
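To make the division structure concrete, here is a minimal, self-contained COBOL program (an illustrative sketch, not taken from the standard or the source; it uses COBOL-2002 free format with *>-style comments, and compiles under a modern compiler such as GnuCOBOL):

    *> Identification division: names the program.
    IDENTIFICATION DIVISION.
    PROGRAM-ID. MOVE-DEMO.
    *> Environment division: describes the hardware and files (none needed here).
    ENVIRONMENT DIVISION.
    *> Data division: every data item is declared with a PICTURE clause.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  X  PIC 9(3) VALUE 42.  *> three-digit unsigned numeric field
    01  Y  PIC 9(3) VALUE 0.
    *> Procedure division: executable sentences, grouped into paragraphs.
    PROCEDURE DIVISION.
    MAIN-PARA.
        MOVE X TO Y
        DISPLAY "Y = " Y  *> prints "Y = 042" (leading zeros come from PIC 9(3))
        STOP RUN.

The four divisions must appear in exactly this order, and the prose-like MOVE X TO Y sentence illustrates the self-documenting style described above.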
Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text.
COBOL has been criticized for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic programs that are hard to comprehend as a whole, despite their local readability.
For years, COBOL has been the assumed programming language for business operations on mainframes, although in recent years many COBOL workloads have been moved to cloud computing.
In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost $600,000. At a time when new programming languages were proliferating, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster.
On 8 April 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet and Saul Gorn.
At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they "thoroughly understood" the DoD's problems. The DoD operated 225 computers, had 175 more on order and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs and ease modernization.
Charles Phillips agreed to sponsor the meeting and tasked the delegation with drafting the agenda.
On 28 and 29 May 1959 (exactly one year after the Zürich ALGOL 58 meeting), a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs.
Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power.
The meeting resulted in the creation of a steering committee and short-, intermediate- and long-range committees. The short-range committee was given until September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages, and it did not explicitly direct them to create a new language.
The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as "gross optimism" and doubted that the language really would be a stopgap.
The steering committee met on 4 June and agreed to name the entire activity as the Committee on Data Systems Languages, or CODASYL, and to form an executive committee.
The short-range committee members represented six computer manufacturers and three government agencies. The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the U.S. Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the U.S. National Bureau of Standards. Work began by investigating data description, statements, existing applications and user experiences.
The committee mainly examined the FLOW-MATIC, AIMACO and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands and the separation of data descriptions and instructions.
Hopper is sometimes called "the mother of COBOL" or "the grandmother of COBOL", although Jean Sammet, a lead designer of COBOL, said Hopper "was not the mother, creator or developer of Cobol".
IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a "strong anti-IBM bias" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English.
In 1980, Grace Hopper commented that "COBOL 60 is 95% FLOW-MATIC" and that COMTRAN had had an "extremely small" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to "keep other people happy [so they] wouldn't try to knock us out".
Features from COMTRAN incorporated into COBOL included formulas, the PICTURE clause, an improved IF statement, which obviated the need for GO TOs, and a more robust file management system.
The usefulness of the committee's work was the subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple.
Controversial features included those some considered useless or too advanced for data processing users. Such features included Boolean expressions, formulas and table subscripts (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time) and functions (thought of as purely mathematical and of no use in data processing).
The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that "it contains rough spots and requires some additions", and Bob Bemer later described them as a "hodgepodge". The subcommittee was given until December to improve it.
At a mid-September meeting, the committee discussed the new language's name. Suggestions included "BUSY" (Business System), "INFOSYL" (Information System Language) and "COCOSYL" (Common Computer Systems Language). It is unclear who coined the name "COBOL", although Bob Bemer later claimed it had been his suggestion.
In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it.
This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste.
It soon became apparent that the committee was too large for any further progress to be made quickly. A frustrated Howard Bromberg bought a $15 tombstone with "COBOL" engraved on it and sent it to Charles Phillips to demonstrate his displeasure.
A sub-committee was formed to analyze existing languages; it was made up of six individuals: William Selden and Gertrude Tierney of IBM, Howard Bromberg and Howard Discount of RCA, and Vernon Reeves and Jean E. Sammet of Sylvania Electric Products.
The sub-committee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification.
The specifications were approved by the executive committee on 8 January 1960, and sent to the government printing office, which printed them as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers.
The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications.
During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL.
Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved.
The relative influence of the languages that shaped COBOL is still acknowledged in the recommended advisory printed in all COBOL reference manuals:
COBOL is an industry language and is not the property of any company or group of companies, or of any organization or group of organizations.
No warranty, expressed or implied, is made by any contributor or by the CODASYL COBOL Committee as to the accuracy and functioning of the programming system and language. Moreover, no responsibility is assumed by any contributor, or by the committee, in connection therewith. The authors and copyright holders of the copyrighted material used herein are as follows:
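FLOW-MATIC (trademark of Sperry Rand Corporation), Programming for the UNIVAC® I and II, Data Automation Systems, copyrighted 1958, 1959, by Sperry Rand Corporation; IBM Commercial Translator Form No. F28-8013, copyrighted 1959 by IBM; FACT, DSI 27A5260-2760, copyrighted 1960 by Minneapolis-Honeywell.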
They have specifically authorized the use of this material, in whole or in part, in the COBOL specifications. Such authorization extends to the reproduction and use of COBOL specifications in programming manuals or similar publications.
It is rather unlikely that Cobol will be around by the end of the decade.
Anonymous, June 1960
Many logical flaws were found in COBOL 60, leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-range committee performed a total cleanup and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained.
COBOL is a difficult language to write a compiler for, due to the large syntax and many optional elements within syntactic constructs as well as to the need to generate efficient code for a language with many possible data representations, implicit type conversions, and necessary set-ups for I/O operations. Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91.
In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease.
The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables.
Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972.
By 1970, COBOL had become the most widely used programming language in the world.
Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970 and 1973, including changes such as new inter-program communication, debugging and file merging facilities as well as improved string-handling and library inclusion features.
Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee.
The Programming Language Committee was not well-known, however. The vice-president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available.
In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the DELETE statement and the segmentation module. Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT) and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL, but was reinstated before the standard was published. ISO later adopted the updated standard in 1978.
In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Brophy described previous conversions of their 40-million-line code base as "non-productive" and a "complete waste of our programmer resources". Later that year, the Data Processing Management Association (DPMA) said it was "strongly opposed" to the new standard, citing "prohibitive" conversion costs and enhancements that were "forced on the user".
During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard.
In 1979, ISO TC97-SC5 established the international COBOL Experts Group on the initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States. Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need for new COBOL features. After three years, ISO changed the status of the group to a formal working group: WG 4 COBOL. The group took primary ownership of and responsibility for developing the COBOL standard, while ANSI made most of the proposals.
In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a VAX/VMS COBOL-80, and noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging.
The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed.
In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985.
Sixty features were changed or deprecated and 115 were added, such as scope terminators (END-IF, END-PERFORM, END-READ and so on), nested subprograms, the no-operation statement CONTINUE, the switch-like EVALUATE statement, and reference modification, which allows access to substrings.
The new standard was adopted by all national standard bodies, including ANSI.
Two amendments followed in 1989 and 1993, the first introducing intrinsic functions and the other providing corrections.
In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs.
In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk.
The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by that year. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The finalized ISO standard was approved and published in late 2002.
Fujitsu/GTSoftware, Micro Focus and RainCode introduced object-oriented COBOL compilers targeting the .NET Framework.
There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85. These other features included free-format code, user-defined functions, user-defined data types, recursion, and locale-based processing.
Three corrigenda were published for the standard: two in 2006 and one in 2009.
Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL.
COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced.
COBOL 2014 includes the following changes: portable arithmetic results were replaced by IEEE 754 data types; major features, such as the report writer and the screen-handling facility, were made optional; method overloading was added; and dynamic-capacity tables were introduced.
The COBOL 2023 standard added a few new features, among them transaction processing with the new COMMIT and ROLLBACK statements and an asynchronous messaging facility.
There is as yet no known complete implementation of this standard.
COBOL programs are used globally in governments and businesses and are running on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually.
Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. Some studies attribute as much as "24% of Y2K software repair costs to Cobol". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest "a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted".
In 2006 and 2012, Computerworld surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said that they would do so if not for the expense of rewriting legacy code. Alternatively, some businesses have migrated their COBOL programs from mainframes to cheaper, faster hardware.
Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. Reuters reported in 2017 that 43% of banking systems still used COBOL with over 220 billion lines of COBOL code in use.
By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance, so training more people in COBOL has been advocated.
During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold. Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act.
COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be "abbreviated" by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more grammatically appropriate statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can TIME and TIMES, and VALUE and VALUES.
Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see § PICTURE clause) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. 'Hello!'). Separators include the space character and commas and semi-colons followed by a space.
A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.
COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. Although Backus–Naur form did exist at the time, the committee had not heard of it.
As an example, consider the following description of an ADD statement, rendered here in a flattened form of that metalanguage (braces enclose alternatives separated by vertical bars, square brackets enclose options, and "..." marks repeatable items):
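ADD { identifier-1 | literal-1 } ... TO { identifier-2 [ ROUNDED ] } ...
    [ ON SIZE ERROR imperative-statement-1 ]
    [ END-ADD ]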
This description permits variants such as the following (the data names are illustrative):
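ADD 1 TO x
ADD 1, a, b TO x ROUNDED, y, z ROUNDED
ADD a, b TO c
    ON SIZE ERROR
        DISPLAY 'Error'
END-ADD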
The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well.
COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were:
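Sequence number area: columns 1–6, originally used for card or line numbers; its contents are ignored by the compiler
Indicator area: column 7, where a character may mark the line as a comment (*), a continuation of the previous line (-) or a debugging line (D)
Area A: columns 8–11, where division, section and paragraph headers, level indicators and the level numbers 01 and 77 must begin
Area B: columns 12–72, which contains all other code
Program name area: columns 73 onwards (traditionally ending at column 80), ignored by the compiler and historically used to identify the program or card deck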
In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column.
COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator.
The identification division identifies the following code entity and contains the definition of a class or interface.
Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.
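For example, assuming an object reference my-account whose class defines methods named "deposit" and "interest-for" (all names hypothetical), the two calling styles might look like this sketch:

*> explicit invocation, analogous to CALL
INVOKE my-account "deposit" USING amount
*> inline method invocation, usable where a value is expected
MOVE my-account::"interest-for"(days) TO interest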
COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves external code no way to access it. Method overloading was added in COBOL 2014.
The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information.
COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be accessed randomly and on which the file can be sorted. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. Other implementations include Record Management Services on OpenVMS and Enscribe on HPE NonStop (Tandem). Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access.
A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length.
The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section and the screen section, for text-based user interfaces.
Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.
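For example, such a hierarchy may be declared as in the following sketch (the PICTURE clauses are illustrative):

01  some-record.
    05  num        PIC 9(10).
    05  the-date.
        10  the-year   PIC 9(4).
        10  the-month  PIC 99.
        10  the-day    PIC 99.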
In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date.
Subordinate items can be disambiguated with the IN (or OF) keyword. For example, consider the example code above along with the following sketch, which declares a second record reusing the same subordinate names (the PICTURE clauses are again illustrative):
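01  sale-date.
    05  the-year   PIC 9(4).
    05  the-month  PIC 99.
    05  the-day    PIC 99.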
The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). This syntax is similar to the "dot notation" supported by most contemporary languages.
A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure of data meant its use was not recommended, and many installations forbade its use.
A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:
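*> the PICTURE clauses are illustrative
77  property-name  PIC X(80).
77  sales-region   PIC 9(5).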
An 88 level-number declares a condition name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item. When the data item contains a value of 'H', the condition-name wage-is-hourly is true, whereas when it contains a value of 'S' or 'Y', the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false.
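*> the declaration of wage-type is illustrative
01  wage-type          PIC X.
    88  wage-is-hourly VALUE 'H'.
    88  wage-is-yearly VALUE 'S', 'Y'.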
Standard COBOL provides the following data types: alphabetic (PICTURE A), alphanumeric (PICTURE X), boolean (PICTURE 1), index, national (PICTURE N), numeric (PICTURE 9), object reference, and pointer.
Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type.
A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters define edited numeric or edited alphanumeric data items.
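A few illustrative declarations (the data names are hypothetical):

01  acct-no     PIC 9(7).      *> seven decimal digits, equivalent to PIC 9999999
01  balance     PIC S9(5)V99.  *> signed, with an implied decimal point (V)
01  edited-amt  PIC ++++9.99.  *> floating leading sign inserted during editing
01  cust-name   PIC X(30).     *> 30 alphanumeric character positions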
The USAGE clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats include binary, packed decimal (binary-coded decimal), display (digits stored as character data, which is the default) and floating-point.
The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings.
Reports are associated with report files, which are files which may only be written to through report writer statements.
Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes its value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records (a sketch; the names and column positions are illustrative):
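RD  sales-report
    PAGE LIMITS 60 LINES
    FIRST DETAIL 3
    CONTROLS seller-name.

01  TYPE PAGE HEADING.
    03  COL 1   VALUE 'Sales Report'.
    03  COL 74  VALUE 'Page'.
    03  COL 79  PIC Z9 SOURCE PAGE-COUNTER.

01  sales-on-day TYPE DETAIL, LINE + 1.
    03  COL 3   VALUE 'Sales on'.
    03  COL 12  PIC 99/99/9999 SOURCE sales-date.
    03  COL 21  VALUE 'were'.
    03  COL 26  PIC $$$$9.99 SOURCE sales-amount.

01  invalid-sales TYPE DETAIL, LINE + 1.
    03  COL 3   VALUE 'INVALID RECORD:'.
    03  COL 19  PIC X(34) SOURCE sales-record.

01  TYPE CONTROL HEADING seller-name, LINE + 2.
    03  COL 1   VALUE 'Seller:'.
    03  COL 9   PIC X(30) SOURCE seller-name.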
The above report description defines the report's page heading, a control heading identifying each salesperson, and detail lines for valid and invalid sales records.
Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division might look like this (again a sketch; the file and condition names are illustrative):
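OPEN INPUT sales, OUTPUT report-out
INITIATE sales-report
PERFORM UNTIL 1 <> 1
    READ sales
        AT END
            EXIT PERFORM
    END-READ
    VALIDATE sales-record
    IF valid-record
        GENERATE sales-on-day
    ELSE
        GENERATE invalid-sales
    END-IF
END-PERFORM
TERMINATE sales-report
CLOSE sales, report-out
.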
Use of the Report Writer facility tends to vary considerably; some organizations use it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.
The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections.
Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used.
A PERFORM statement somewhat resembles a procedure call in newer languages in the sense that execution returns to the code following the PERFORM statement at the end of the called code; however, it does not provide a mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM sub-1 THRU sub-n construct, sketched below with illustrative procedure names:
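PROCEDURE DIVISION.
    PERFORM alpha
    PERFORM alpha THRU gamma
    STOP RUN.

alpha.
    DISPLAY 'A'.
beta.
    DISPLAY 'B'.
gamma.
    DISPLAY 'C'.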
The output of this program will be: "A A B C".
PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORM'ed may execute a PERFORM statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not yet completed, the COBOL 2002 standard stipulates that the behavior is undefined.
The reason is that COBOL, rather than a "return address", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no PERFORM statements happen, control flows from top to bottom through the program. But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if PERFORM THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere with each other's management of the continuation address in several ways.
The following example (taken from Veerman & Verhoeven 2006 and reconstructed here in outline) illustrates the problem:
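LABEL1.
    DISPLAY '1'
    PERFORM LABEL2 THRU LABEL3
    STOP RUN.
LABEL2.
    DISPLAY '2'
    PERFORM LABEL3 THRU LABEL4.
LABEL3.
    DISPLAY '3'.
LABEL4.
    DISPLAY '4'.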
One might expect that the output of this program would be "1 2 3 4 3": after displaying "2", the second PERFORM causes "3" and "4" to be displayed, and then the first invocation continues on with "3". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops, having printed just "1 2 3". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed "1 2 3 4 3". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable.
A special consequence of this limitation is that PERFORM cannot be used to write recursive code. Another simple example to illustrate this (slightly simplified from Veerman & Verhoeven 2006, again reconstructed in outline):
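MOVE 1 TO A
PERFORM LABEL
STOP RUN.
LABEL.
    DISPLAY A
    IF A < 3
        ADD 1 TO A
        PERFORM LABEL
    END-IF
    DISPLAY 'END'.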
One might expect that the output is "1 2 3 END END END", and in fact that is what some COBOL compilers will produce. But other compilers, like IBM COBOL, will produce code that prints "1 2 3 END END END END ..." and so on, printing "END" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to DISPLAY 'END'.
COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.
COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, a sketch along the following lines (with illustrative data and procedure names) might be used to control a CNC lathe:
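*> lid-closed and lid-open are assumed to be 88-level conditions
EVALUATE TRUE ALSO TRUE
    WHEN lid-closed ALSO current-speed < desired-speed
        PERFORM speed-up-machine
    WHEN lid-closed ALSO current-speed > desired-speed
        PERFORM slow-down-machine
    WHEN lid-open ALSO current-speed NOT = ZERO
        PERFORM emergency-stop
    WHEN OTHER
        CONTINUE
END-EVALUATE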
The PERFORM statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure.
The GOBACK statement is a return statement and the STOP statement stops the program. The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure.
Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected.
File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed.
User interaction is done using ACCEPT and DISPLAY.
The following verbs manipulate data:
Files and tables are sorted using SORT and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order.
Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.
Nested statements terminated with a period are a common source of bugs. For example, examine the following code:
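IF x
    DISPLAY y.
    DISPLAY z.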
Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y.
Another bug is a result of the dangling else problem, when two IF statements can associate with an ELSE, as in the following fragment (the data names are illustrative):
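IF x
    IF y
        DISPLAY a
ELSE
    DISPLAY b.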
In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF.
The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002.
The ALTER statement was poorly regarded because it undermined "locality of context" and made a program's overall logic difficult to comprehend. As textbook author Daniel D. McCracken wrote in 1976, when "someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ... the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer."
A "Hello, world" program in COBOL:
When the now-famous "Hello, World!" program example in The C Programming Language was first published in 1978, a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punched-card reader and 80-column punched cards. The listing below, with an empty DATA DIVISION, was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.
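A reconstruction in outline follows; the job statement parameters and the cataloged procedure name COBUCLG follow common MVS 3.8J practice and may differ in detail from the original listing:

//COBUCLG  JOB (001),'COBOL BASE TEST',CLASS=A,MSGCLASS=A,MSGLEVEL=(1,1)
//BASETEST EXEC COBUCLG
//COB.SYSIN DD *
 00000* VALIDATION OF BASE COBOL INSTALL
 01000 IDENTIFICATION DIVISION.
 01100 PROGRAM-ID. 'HELLO'.
 02000 ENVIRONMENT DIVISION.
 02100 CONFIGURATION SECTION.
 02110 SOURCE-COMPUTER.  GNULINUX.
 02120 OBJECT-COMPUTER.  HERCULES.
 02200 SPECIAL-NAMES.
 02210     CONSOLE IS CONSL.
 03000 DATA DIVISION.
 04000 PROCEDURE DIVISION.
 04100 00-MAIN.
 04110     DISPLAY 'HELLO, WORLD' UPON CONSL.
 04900     STOP RUN.
//GO.SYSPRINT DD SYSOUT=A
//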
After the JCL was submitted, the MVS console displayed the job's start and end messages together with the HELLO, WORLD line written to the console.
The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.
In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975 and entitled "How do we tell truths that might hurt?", in which he was critical of COBOL and several other contemporary languages, remarking that "the use of COBOL cripples the mind".
In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be "written by programmers that have never had the benefit of structured COBOL taught well", arguing that the issue was primarily one of training.
One cause of spaghetti code was the GO TO statement. Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality. GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, PERFORM could be used only with procedures, so loop bodies were not located where they were used, making programs harder to understand.
COBOL programs were infamous for being monolithic and lacking modularization. COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake.
Another complication stemmed from the ability to PERFORM THRU a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule.
This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included.
Nevertheless, much important legacy COBOL software uses unstructured code, which has become practically unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.
COBOL was intended to be a highly portable, "common" language. However, by 2001, around 300 dialects had been created. One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 possible variants.
COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.
A weak, verbose, and flabby language used by code grinders to do boring mindless things on dinosaur mainframes. [...] Its very name is seldom uttered without ritual expressions of disgust or horror.
The Jargon File 4.4.8.
COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers.
The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with "incomprehensible" code and the main changes in COBOL-85 were there to help ease maintenance.
Jean Sammet, a short-range committee member, noted that "little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL" which she attributed to COBOL's verbose syntax.
The COBOL community has always been isolated from the computer science community. No academic computer scientists participated in the design of COBOL: all of those on the committee came from commerce or government. Computer scientists at the time were more interested in fields like numerical analysis, physics and system programming than the commercial file-processing problems which COBOL development tackled. Jean Sammet attributed COBOL's unpopularity to an initial "snob reaction" due to its inelegance, the lack of influential computer scientists participating in the design process and a disdain for business data processing. The COBOL specification used a unique "notation", or metalanguage, to define its syntax rather than the new Backus–Naur form, of which the committee had not heard. This resulted in "severe" criticism.
The academic world tends to regard COBOL as verbose, clumsy and inelegant, and tries to ignore it, although there are probably more COBOL programs and programmers in the world than there are for FORTRAN, ALGOL and PL/I combined. For the most part, only schools with an immediate vocational objective provide instruction in COBOL.
Richard Conway and David Gries, 1973
Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). By 1985, there were twice as many books on FORTRAN and four times as many on BASIC as on COBOL in the Library of Congress. University professors taught more modern, state-of-the-art languages and techniques instead of COBOL which was said to have a "trade school" nature. Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that "academics ... hate COBOL" and that computer science graduates "had 'hate COBOL' drilled into them".
By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems.
In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.
Doubts have been raised about the competence of the standards committee. Short-term committee member Howard Bromberg said that there was "little control" over the development process and that it was "plagued by discontinuity of personnel and ... a lack of talent." Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence.
COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.
COBOL's data structures influenced subsequent programming languages. Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems and aggregated data was a significant advance over Fortran's arrays.
PICTURE data declarations were incorporated into PL/I, with minor changes.
COBOL's COPY facility, although considered "primitive", influenced the development of include directives.
The focus on portability and standardization meant programs written in COBOL could be portable and facilitated the spread of the language to a wide variety of hardware platforms and operating systems. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.
{
"paragraph_id": 0,
"text": "COBOL (/ˈkoʊbɒl, -bɔːl/; an acronym for \"common business-oriented language\") is a compiled English-like computer programming language designed for business use. It is an imperative, procedural and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages or replaced with other software.",
"title": ""
},
{
"paragraph_id": 1,
"text": "COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly forced computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023.",
"title": ""
},
{
"paragraph_id": 2,
"text": "COBOL statements have prose syntax such as MOVE x TO y, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. This contrasts with the succinct and mathematically-inspired syntax of other languages (in this case, y = x;).",
"title": ""
},
{
"paragraph_id": 3,
"text": "COBOL code is split into four divisions (identification, environment, data, and procedure) containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions and just one class.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text.",
"title": ""
},
{
"paragraph_id": 5,
"text": "COBOL has been criticized for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic programs that are hard to comprehend as a whole, despite their local readability.",
"title": ""
},
{
"paragraph_id": 6,
"text": "For years, COBOL has been assumed as a programming language for business operations in mainframes, although in recent years, many COBOL operations have been moved to cloud computing.",
"title": ""
},
{
"paragraph_id": 7,
"text": "In the late 1950s, computer users and manufacturers were becoming concerned about the rising cost of programming. A 1959 survey had found that in any data processing installation, the programming cost US$800,000 on average and that translating programs to run on new hardware would cost $600,000. At a time when new programming languages were proliferating, the same survey suggested that if a common business-oriented language were used, conversion would be far cheaper and faster.",
"title": "History and specification"
},
{
"paragraph_id": 8,
"text": "On 8 April 1959, Mary K. Hawes, a computer scientist at Burroughs Corporation, called a meeting of representatives from academia, computer users, and manufacturers at the University of Pennsylvania to organize a formal meeting on common business languages. Representatives included Grace Hopper (inventor of the English-like data processing language FLOW-MATIC), Jean Sammet and Saul Gorn.",
"title": "History and specification"
},
{
"paragraph_id": 9,
"text": "At the April meeting, the group asked the Department of Defense (DoD) to sponsor an effort to create a common business language. The delegation impressed Charles A. Phillips, director of the Data System Research Staff at the DoD, who thought that they \"thoroughly understood\" the DoD's problems. The DoD operated 225 computers, had 175 more on order and had spent over $200 million on implementing programs to run on them. Portable programs would save time, reduce costs and ease modernization.",
"title": "History and specification"
},
{
"paragraph_id": 10,
"text": "Charles Phillips agreed to sponsor the meeting and tasked the delegation with drafting the agenda.",
"title": "History and specification"
},
{
"paragraph_id": 11,
"text": "On 28 and 29 May 1959 (exactly one year after the Zürich ALGOL 58 meeting), a meeting was held at the Pentagon to discuss the creation of a common programming language for business. It was attended by 41 people and was chaired by Phillips. The Department of Defense was concerned about whether it could run the same data processing programs on different computers. FORTRAN, the only mainstream language at the time, lacked the features needed to write such programs.",
"title": "History and specification"
},
{
"paragraph_id": 12,
"text": "Representatives enthusiastically described a language that could work in a wide variety of environments, from banking and insurance to utilities and inventory control. They agreed unanimously that more people should be able to program and that the new language should not be restricted by the limitations of contemporary technology. A majority agreed that the language should make maximal use of English, be capable of change, be machine-independent and be easy to use, even at the expense of power.",
"title": "History and specification"
},
{
"paragraph_id": 13,
"text": "The meeting resulted in the creation of a steering committee and short, intermediate and long-range committees. The short-range committee was given to September (three months) to produce specifications for an interim language, which would then be improved upon by the other committees. Their official mission, however, was to identify the strengths and weaknesses of existing programming languages and did not explicitly direct them to create a new language.",
"title": "History and specification"
},
{
"paragraph_id": 14,
"text": "The deadline was met with disbelief by the short-range committee. One member, Betty Holberton, described the three-month deadline as \"gross optimism\" and doubted that the language really would be a stopgap.",
"title": "History and specification"
},
{
"paragraph_id": 15,
"text": "The steering committee met on 4 June and agreed to name the entire activity as the Committee on Data Systems Languages, or CODASYL, and to form an executive committee.",
"title": "History and specification"
},
{
"paragraph_id": 16,
"text": "The short-range committee members represented six computer manufacturers and three government agencies. The computer manufacturers were Burroughs Corporation, IBM, Minneapolis-Honeywell (Honeywell Labs), RCA, Sperry Rand, and Sylvania Electric Products. The government agencies were the U.S. Air Force, the Navy's David Taylor Model Basin, and the National Bureau of Standards (now the National Institute of Standards and Technology). The committee was chaired by Joseph Wegstein of the U.S. National Bureau of Standards. Work began by investigating data description, statements, existing applications and user experiences.",
"title": "History and specification"
},
{
"paragraph_id": 17,
"text": "The committee mainly examined the FLOW-MATIC, AIMACO and COMTRAN programming languages. The FLOW-MATIC language was particularly influential because it had been implemented and because AIMACO was a derivative of it with only minor changes. FLOW-MATIC's inventor, Grace Hopper, also served as a technical adviser to the committee. FLOW-MATIC's major contributions to COBOL were long variable names, English words for commands and the separation of data descriptions and instructions.",
"title": "History and specification"
},
{
"paragraph_id": 18,
"text": "Hopper is sometimes called \"the mother of COBOL\" or \"the grandmother of COBOL\", although Jean Sammet, a lead designer of COBOL, said Hopper \"was not the mother, creator or developer of Cobol\".",
"title": "History and specification"
},
{
"paragraph_id": 19,
"text": "IBM's COMTRAN language, invented by Bob Bemer, was regarded as a competitor to FLOW-MATIC by a short-range committee made up of colleagues of Grace Hopper. Some of its features were not incorporated into COBOL so that it would not look like IBM had dominated the design process, and Jean Sammet said in 1981 that there had been a \"strong anti-IBM bias\" from some committee members (herself included). In one case, after Roy Goldfinger, author of the COMTRAN manual and intermediate-range committee member, attended a subcommittee meeting to support his language and encourage the use of algebraic expressions, Grace Hopper sent a memo to the short-range committee reiterating Sperry Rand's efforts to create a language based on English.",
"title": "History and specification"
},
{
"paragraph_id": 20,
"text": "In 1980, Grace Hopper commented that \"COBOL 60 is 95% FLOW-MATIC\" and that COMTRAN had had an \"extremely small\" influence. Furthermore, she said that she would claim that work was influenced by both FLOW-MATIC and COMTRAN only to \"keep other people happy [so they] wouldn't try to knock us out\".",
"title": "History and specification"
},
{
"paragraph_id": 21,
"text": "Features from COMTRAN incorporated into COBOL included formulas, the PICTURE clause, an improved IF statement, which obviated the need for GO TOs, and a more robust file management system.",
"title": "History and specification"
},
{
"paragraph_id": 22,
"text": "The usefulness of the committee's work was subject of great debate. While some members thought the language had too many compromises and was the result of design by committee, others felt it was better than the three languages examined. Some felt the language was too complex; others, too simple.",
"title": "History and specification"
},
{
"paragraph_id": 23,
"text": "Controversial features included those some considered useless or too advanced for data processing users. Such features included Boolean expressions, formulas and table subscripts (indices). Another point of controversy was whether to make keywords context-sensitive and the effect that would have on readability. Although context-sensitive keywords were rejected, the approach was later used in PL/I and partially in COBOL from 2002. Little consideration was given to interactivity, interaction with operating systems (few existed at that time) and functions (thought of as purely mathematical and of no use in data processing).",
"title": "History and specification"
},
{
"paragraph_id": 24,
"text": "The specifications were presented to the executive committee on 4 September. They fell short of expectations: Joseph Wegstein noted that \"it contains rough spots and requires some additions\", and Bob Bemer later described them as a \"hodgepodge\". The subcommittee was given until December to improve it.",
"title": "History and specification"
},
{
"paragraph_id": 25,
"text": "At a mid-September meeting, the committee discussed the new language's name. Suggestions included \"BUSY\" (Business System), \"INFOSYL\" (Information System Language) and \"COCOSYL\" (Common Computer Systems Language). It is unclear who coined the name \"COBOL\", although Bob Bemer later claimed it had been his suggestion.",
"title": "History and specification"
},
{
"paragraph_id": 26,
"text": "In October, the intermediate-range committee received copies of the FACT language specification created by Roy Nutt. Its features impressed the committee so much that they passed a resolution to base COBOL on it.",
"title": "History and specification"
},
{
"paragraph_id": 27,
"text": "This was a blow to the short-range committee, who had made good progress on the specification. Despite being technically superior, FACT had not been created with portability in mind or through manufacturer and user consensus. It also lacked a demonstrable implementation, allowing supporters of a FLOW-MATIC-based COBOL to overturn the resolution. RCA representative Howard Bromberg also blocked FACT, so that RCA's work on a COBOL implementation would not go to waste.",
"title": "History and specification"
},
{
"paragraph_id": 28,
"text": "It soon became apparent that the committee was too large for any further progress to be made quickly. A frustrated Howard Bromberg bought a $15 tombstone with \"COBOL\" engraved on it and sent it to Charles Phillips to demonstrate his displeasure.",
"title": "History and specification"
},
{
"paragraph_id": 29,
"text": "A sub-committee was formed to analyze existing languages and was made up of six individuals:",
"title": "History and specification"
},
{
"paragraph_id": 30,
"text": "The sub-committee did most of the work creating the specification, leaving the short-range committee to review and modify their work before producing the finished specification.",
"title": "History and specification"
},
{
"paragraph_id": 31,
"text": "The specifications were approved by the executive committee on 8 January 1960, and sent to the government printing office, which printed them as COBOL 60. The language's stated objectives were to allow efficient, portable programs to be easily written, to allow users to move to new systems with minimal effort and cost, and to be suitable for inexperienced programmers.",
"title": "History and specification"
},
{
"paragraph_id": 32,
"text": "The CODASYL Executive Committee later created the COBOL Maintenance Committee to answer questions from users and vendors and to improve and expand the specifications.",
"title": "History and specification"
},
{
"paragraph_id": 33,
"text": "During 1960, the list of manufacturers planning to build COBOL compilers grew. By September, five more manufacturers had joined CODASYL (Bendix, Control Data Corporation, General Electric (GE), National Cash Register and Philco), and all represented manufacturers had announced COBOL compilers. GE and IBM planned to integrate COBOL into their own languages, GECOM and COMTRAN, respectively. In contrast, International Computers and Tabulators planned to replace their language, CODEL, with COBOL.",
"title": "History and specification"
},
{
"paragraph_id": 34,
"text": "Meanwhile, RCA and Sperry Rand worked on creating COBOL compilers. The first COBOL program ran on 17 August on an RCA 501. On 6 and 7 December, the same COBOL program (albeit with minor changes) ran on an RCA computer and a Remington-Rand Univac computer, demonstrating that compatibility could be achieved.",
"title": "History and specification"
},
{
"paragraph_id": 35,
"text": "The relative influences of which languages were used continues to this day in the recommended advisory printed in all COBOL reference manuals:",
"title": "History and specification"
},
{
"paragraph_id": 36,
"text": "COBOL is an industry language and is not the property of any company or group of companies, or of any organization or group of organizations.",
"title": "History and specification"
},
{
"paragraph_id": 37,
"text": "No warranty, expressed or implied, is made by any contributor or by the CODASYL COBOL Committee as to the accuracy and functioning of the programming system and language. Moreover, no responsibility is assumed by any contributor, or by the committee, in connection therewith. The authors and copyright holders of the copyrighted material used herein are as follows:",
"title": "History and specification"
},
{
"paragraph_id": 38,
"text": "They have specifically authorized the use of this material, in whole or in part, in the COBOL specifications. Such authorization extends to the reproduction and use of COBOL specifications in programming manuals or similar publications.",
"title": "History and specification"
},
{
"paragraph_id": 39,
"text": "It is rather unlikely that Cobol will be around by the end of the decade.",
"title": "History and specification"
},
{
"paragraph_id": 40,
"text": "Anonymous, June 1960",
"title": "History and specification"
},
{
"paragraph_id": 41,
"text": "Many logical flaws were found in COBOL 60, leading General Electric's Charles Katz to warn that it could not be interpreted unambiguously. A reluctant short-term committee performed a total cleanup and, by March 1963, it was reported that COBOL's syntax was as definable as ALGOL's, although semantic ambiguities remained.",
"title": "History and specification"
},
{
"paragraph_id": 42,
"text": "COBOL is a difficult language to write a compiler for, due to the large syntax and many optional elements within syntactic constructs as well as to the need to generate efficient code for a language with many possible data representations, implicit type conversions, and necessary set-ups for I/O operations. Early COBOL compilers were primitive and slow. A 1962 US Navy evaluation found compilation speeds of 3–11 statements per minute. By mid-1964, they had increased to 11–1000 statements per minute. It was observed that increasing memory would drastically increase speed and that compilation costs varied wildly: costs per statement were between $0.23 and $18.91.",
"title": "History and specification"
},
{
"paragraph_id": 43,
"text": "In late 1962, IBM announced that COBOL would be their primary development language and that development of COMTRAN would cease.",
"title": "History and specification"
},
{
"paragraph_id": 44,
"text": "The COBOL specification was revised three times in the five years after its publication. COBOL-60 was replaced in 1961 by COBOL-61. This was then replaced by the COBOL-61 Extended specifications in 1963, which introduced the sort and report writer facilities. The added facilities corrected flaws identified by Honeywell in late 1959 in a letter to the short-range committee. COBOL Edition 1965 brought further clarifications to the specifications and introduced facilities for handling mass storage files and tables.",
"title": "History and specification"
},
{
"paragraph_id": 45,
"text": "Efforts began to standardize COBOL to overcome incompatibilities between versions. In late 1962, both ISO and the United States of America Standards Institute (now ANSI) formed groups to create standards. ANSI produced USA Standard COBOL X3.23 in August 1968, which became the cornerstone for later versions. This version was known as American National Standard (ANS) COBOL and was adopted by ISO in 1972.",
"title": "History and specification"
},
{
"paragraph_id": 46,
"text": "By 1970, COBOL had become the most widely used programming language in the world.",
"title": "History and specification"
},
{
"paragraph_id": 47,
"text": "Independently of the ANSI committee, the CODASYL Programming Language Committee was working on improving the language. They described new versions in 1968, 1969, 1970 and 1973, including changes such as new inter-program communication, debugging and file merging facilities as well as improved string-handling and library inclusion features.",
"title": "History and specification"
},
{
"paragraph_id": 48,
"text": "Although CODASYL was independent of the ANSI committee, the CODASYL Journal of Development was used by ANSI to identify features that were popular enough to warrant implementing. The Programming Language Committee also liaised with ECMA and the Japanese COBOL Standard committee.",
"title": "History and specification"
},
{
"paragraph_id": 49,
"text": "The Programming Language Committee was not well-known, however. The vice-president, William Rinehuls, complained that two-thirds of the COBOL community did not know of the committee's existence. It also lacked the funds to make public documents, such as minutes of meetings and change proposals, freely available.",
"title": "History and specification"
},
{
"paragraph_id": 50,
"text": "In 1974, ANSI published a revised version of (ANS) COBOL, containing new features such as file organizations, the DELETE statement and the segmentation module. Deleted features included the NOTE statement, the EXAMINE statement (which was replaced by INSPECT) and the implementer-defined random access module (which was superseded by the new sequential and relative I/O modules). These made up 44 changes, which rendered existing statements incompatible with the new standard. The report writer was slated to be removed from COBOL, but was reinstated before the standard was published. ISO later adopted the updated standard in 1978.",
"title": "History and specification"
},
{
"paragraph_id": 51,
"text": "In June 1978, work began on revising COBOL-74. The proposed standard (commonly called COBOL-80) differed significantly from the previous one, causing concerns about incompatibility and conversion costs. In January 1981, Joseph T. Brophy, Senior Vice-president of Travelers Insurance, threatened to sue the standard committee because it was not upwards compatible with COBOL-74. Mr. Brophy described previous conversions of their 40-million-line code base as \"non-productive\" and a \"complete waste of our programmer resources\". Later that year, the Data Processing Management Association (DPMA) said it was \"strongly opposed\" to the new standard, citing \"prohibitive\" conversion costs and enhancements that were \"forced on the user\".",
"title": "History and specification"
},
{
"paragraph_id": 52,
"text": "During the first public review period, the committee received 2,200 responses, of which 1,700 were negative form letters. Other responses were detailed analyses of the effect COBOL-80 would have on their systems; conversion costs were predicted to be at least 50 cents per line of code. Fewer than a dozen of the responses were in favor of the proposed standard.",
"title": "History and specification"
},
{
"paragraph_id": 53,
"text": "ISO TC97-SC5 installed in 1979 the international COBOL Experts Group, on initiative of Wim Ebbinkhuijsen. The group consisted of COBOL experts from many countries, including the United States. Its goal was to achieve mutual understanding and respect between ANSI and the rest of the world with regard to the need of new COBOL features. After three years, ISO changed the status of the group to a formal Working Group: WG 4 COBOL. The group took primary ownership and development of the COBOL standard, where ANSI made most of the proposals.",
"title": "History and specification"
},
{
"paragraph_id": 54,
"text": "In 1983, the DPMA withdrew its opposition to the standard, citing the responsiveness of the committee to public concerns. In the same year, a National Bureau of Standards study concluded that the proposed standard would present few problems. A year later, DEC released a VAX/VMS COBOL-80, and noted that conversion of COBOL-74 programs posed few problems. The new EVALUATE statement and inline PERFORM were particularly well received and improved productivity, thanks to simplified control flow and debugging.",
"title": "History and specification"
},
{
"paragraph_id": 55,
"text": "The second public review drew another 1,000 (mainly negative) responses, while the last drew just 25, by which time many concerns had been addressed.",
"title": "History and specification"
},
{
"paragraph_id": 56,
"text": "In 1985, the ISO Working Group 4 accepted the then-version of the ANSI proposed standard, made several changes and set it as the new ISO standard COBOL 85. It was published in late 1985.",
"title": "History and specification"
},
{
"paragraph_id": 57,
"text": "Sixty features were changed or deprecated and 115 were added, such as:",
"title": "History and specification"
},
{
"paragraph_id": 58,
"text": "The new standard was adopted by all national standard bodies, including ANSI.",
"title": "History and specification"
},
{
"paragraph_id": 59,
"text": "Two amendments followed in 1989 and 1993, the first introducing intrinsic functions and the other providing corrections.",
"title": "History and specification"
},
{
"paragraph_id": 60,
"text": "In 1997, Gartner Group estimated that there were a total of 200 billion lines of COBOL in existence, which ran 80% of all business programs.",
"title": "History and specification"
},
{
"paragraph_id": 61,
"text": "In the early 1990s, work began on adding object-orientation in the next full revision of COBOL. Object-oriented features were taken from C++ and Smalltalk.",
"title": "History and specification"
},
{
"paragraph_id": 62,
"text": "The initial estimate was to have this revision completed by 1997, and an ISO Committee Draft (CD) was available by 1997. Some vendors (including Micro Focus, Fujitsu, and IBM) introduced object-oriented syntax based on drafts of the full revision. The final approved ISO standard was approved and published in late 2002.",
"title": "History and specification"
},
{
"paragraph_id": 63,
"text": "Fujitsu/GTSoftware, Micro Focus and RainCode introduced object-oriented COBOL compilers targeting the .NET Framework.",
"title": "History and specification"
},
{
"paragraph_id": 64,
"text": "There were many other new features, many of which had been in the CODASYL COBOL Journal of Development since 1978 and had missed the opportunity to be included in COBOL-85. These other features included:",
"title": "History and specification"
},
{
"paragraph_id": 65,
"text": "Three corrigenda were published for the standard: two in 2006 and one in 2009.",
"title": "History and specification"
},
{
"paragraph_id": 66,
"text": "Between 2003 and 2009, three technical reports were produced describing object finalization, XML processing and collection classes for COBOL.",
"title": "History and specification"
},
{
"paragraph_id": 67,
"text": "COBOL 2002 suffered from poor support: no compilers completely supported the standard. Micro Focus found that it was due to a lack of user demand for the new features and due to the abolition of the NIST test suite, which had been used to test compiler conformance. The standardization process was also found to be slow and under-resourced.",
"title": "History and specification"
},
{
"paragraph_id": 68,
"text": "COBOL 2014 includes the following changes:",
"title": "History and specification"
},
{
"paragraph_id": 69,
"text": "The COBOL 2023 standard added a few new features:",
"title": "History and specification"
},
{
"paragraph_id": 70,
"text": "There is as yet no known complete implementation of this standard.",
"title": "History and specification"
},
{
"paragraph_id": 71,
"text": "COBOL programs are used globally in governments and businesses and are running on diverse operating systems such as z/OS, z/VSE, VME, Unix, NonStop OS, OpenVMS and Windows. In 1997, the Gartner Group reported that 80% of the world's business ran on COBOL with over 200 billion lines of code and 5 billion lines more being written annually.",
"title": "History and specification"
},
{
"paragraph_id": 72,
"text": "Near the end of the 20th century, the year 2000 problem (Y2K) was the focus of significant COBOL programming effort, sometimes by the same programmers who had designed the systems decades before. The particular level of effort required to correct COBOL code has been attributed to the large amount of business-oriented COBOL, as business applications use dates heavily, and to fixed-length data fields. Some studies attribute as much as \"24% of Y2K software repair costs to Cobol\". After the clean-up effort put into these programs for Y2K, a 2003 survey found that many remained in use. The authors said that the survey data suggest \"a gradual decline in the importance of COBOL in application development over the [following] 10 years unless ... integration with other languages and technologies can be adopted\".",
"title": "History and specification"
},
{
"paragraph_id": 73,
"text": "In 2006 and 2012, Computerworld surveys (of 352 readers) found that over 60% of organizations used COBOL (more than C++ and Visual Basic .NET) and that for half of those, COBOL was used for the majority of their internal software. 36% of managers said they planned to migrate from COBOL, and 25% said that they would do so if not for the expense of rewriting legacy code. Alternatively, some businesses have migrated their COBOL programs from mainframes to cheaper, faster hardware.",
"title": "History and specification"
},
{
"paragraph_id": 74,
"text": "Testimony before the House of Representatives in 2016 indicated that COBOL is still in use by many federal agencies. Reuters reported in 2017 that 43% of banking systems still used COBOL with over 220 billion lines of COBOL code in use.",
"title": "History and specification"
},
{
"paragraph_id": 75,
"text": "By 2019, the number of COBOL programmers was shrinking fast due to retirements, leading to an impending skills gap in business and government organizations which still use mainframe systems for high-volume transaction processing. Efforts to rewrite systems in newer languages have proven expensive and problematic, as has the outsourcing of code maintenance, thus proposals to train more people in COBOL are advocated.",
"title": "History and specification"
},
{
"paragraph_id": 76,
"text": "During the COVID-19 pandemic and the ensuing surge of unemployment, several US states reported a shortage of skilled COBOL programmers to support the legacy systems used for unemployment benefit management. Many of these systems had been in the process of conversion to more modern programming languages prior to the pandemic, but the process was put on hold. Similarly, the US Internal Revenue Service rushed to patch its COBOL-based Individual Master File in order to disburse the tens of millions of payments mandated by the Coronavirus Aid, Relief, and Economic Security Act.",
"title": "History and specification"
},
{
"paragraph_id": 77,
"text": "COBOL has an English-like syntax, which is used to describe nearly everything in a program. For example, a condition can be expressed as x IS GREATER THAN y or more concisely as x GREATER y or x > y. More complex conditions can be \"abbreviated\" by removing repeated conditions and variables. For example, a > b AND a > c OR a = d can be shortened to a > b AND c OR = d. To support this syntax, COBOL has over 300 keywords. Some of the keywords are simple alternative or pluralized spellings of the same word, which provides for more grammatically appropriate statements and clauses; e.g., the IN and OF keywords can be used interchangeably, as can TIME and TIMES, and VALUE and VALUES.",
"title": "Features"
},
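As an illustrative sketch (all data names are hypothetical), the verbose, concise and abbreviated forms might be written as:

    IF x IS GREATER THAN y DISPLAY 'x is larger' END-IF
    IF x > y DISPLAY 'x is larger' END-IF
    IF a > b AND a > c OR a = d DISPLAY 'match' END-IF
    IF a > b AND c OR = d DISPLAY 'match' END-IF  *> abbreviated form of the previous line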
{
"paragraph_id": 78,
"text": "Each COBOL program is made up of four basic lexical items: words, literals, picture character-strings (see § PICTURE clause) and separators. Words include reserved words and user-defined identifiers. They are up to 31 characters long and may include letters, digits, hyphens and underscores. Literals include numerals (e.g. 12) and strings (e.g. 'Hello!'). Separators include the space character and commas and semi-colons followed by a space.",
"title": "Features"
},
{
"paragraph_id": 79,
"text": "A COBOL program is split into four divisions: the identification division, the environment division, the data division and the procedure division. The identification division specifies the name and type of the source element and is where classes and interfaces are specified. The environment division specifies any program features that depend on the system running it, such as files and character sets. The data division is used to declare variables and parameters. The procedure division contains the program's statements. Each division is sub-divided into sections, which are made up of paragraphs.",
"title": "Features"
},
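A minimal sketch of this four-division structure (the program name and contents are illustrative):

    IDENTIFICATION DIVISION.
    PROGRAM-ID. division-sketch.
    ENVIRONMENT DIVISION.
    DATA DIVISION.
    WORKING-STORAGE SECTION.
    01  greeting  PIC X(13) VALUE 'Hello, COBOL!'.
    PROCEDURE DIVISION.
        DISPLAY greeting
        STOP RUN.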
{
"paragraph_id": 80,
"text": "COBOL's syntax is usually described with a unique metalanguage using braces, brackets, bars and underlining. The metalanguage was developed for the original COBOL specifications. Although Backus–Naur form did exist at the time, the committee had not heard of it.",
"title": "Features"
},
{
"paragraph_id": 81,
"text": "As an example, consider the following description of an ADD statement:",
"title": "Features"
},
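A plain-text approximation of such a description is given below; the standard's metalanguage additionally underlines required keywords, which cannot be reproduced here. Braces denote a choice, brackets an option and ellipses repetition:

    ADD {identifier-1 | literal-1} ... TO {identifier-2 [ROUNDED]} ...
        [ON SIZE ERROR imperative-statement-1]
        [NOT ON SIZE ERROR imperative-statement-2]
        [END-ADD]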
{
"paragraph_id": 82,
"text": "This description permits the following variants:",
"title": "Features"
},
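For instance, the description above permits variants such as (data names are illustrative):

    ADD 1 TO x
    ADD 1, a, b TO x ROUNDED, y, z ROUNDED
    ADD a, b TO c
        ON SIZE ERROR
            DISPLAY 'Error'
    END-ADD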
{
"paragraph_id": 83,
"text": "The height of COBOL's popularity coincided with the era of keypunch machines and punched cards. The program itself was written onto punched cards, then read in and compiled, and the data fed into the program was sometimes on cards as well.",
"title": "Features"
},
{
"paragraph_id": 84,
"text": "COBOL can be written in two formats: fixed (the default) or free. In fixed-format, code must be aligned to fit in certain areas (a hold-over from using punched cards). Until COBOL 2002, these were:",
"title": "Features"
},
{
"paragraph_id": 85,
"text": "In COBOL 2002, Areas A and B were merged to form the program-text area, which now ends at an implementor-defined column.",
"title": "Features"
},
{
"paragraph_id": 86,
"text": "COBOL 2002 also introduced free-format code. Free-format code can be placed in any column of the file, as in newer programming languages. Comments are specified using *>, which can be placed anywhere and can also be used in fixed-format source code. Continuation lines are not present, and the >>PAGE directive replaces the / indicator.",
"title": "Features"
},
{
"paragraph_id": 87,
"text": "The identification division identifies the following code entity and contains the definition of a class or interface.",
"title": "Features"
},
{
"paragraph_id": 88,
"text": "Classes and interfaces have been in COBOL since 2002. Classes have factory objects, containing class methods and variables, and instance objects, containing instance methods and variables. Inheritance and interfaces provide polymorphism. Support for generic programming is provided through parameterized classes, which can be instantiated to use any class or interface. Objects are stored as references which may be restricted to a certain type. There are two ways of calling a method: the INVOKE statement, which acts similarly to CALL, or through inline method invocation, which is analogous to using functions.",
"title": "Features"
},
{
"paragraph_id": 89,
"text": "COBOL does not provide a way to hide methods. Class data can be hidden, however, by declaring it without a PROPERTY clause, which leaves external code no way to access it. Method overloading was added in COBOL 2014.",
"title": "Features"
},
{
"paragraph_id": 90,
"text": "The environment division contains the configuration section and the input-output section. The configuration section is used to specify variable features such as currency signs, locales and character sets. The input-output section contains file-related information.",
"title": "Features"
},
{
"paragraph_id": 91,
"text": "COBOL supports three file formats, or organizations: sequential, indexed and relative. In sequential files, records are contiguous and must be traversed sequentially, similarly to a linked list. Indexed files have one or more indexes which allow records to be randomly accessed and which can be sorted on them. Each record must have a unique key, but other, alternate, record keys need not be unique. Implementations of indexed files vary between vendors, although common implementations, such as C-ISAM and VSAM, are based on IBM's ISAM. other implementations are Record Management Services on OpenVMS and Enscribe on HPE NonStop (Tandem). Relative files, like indexed files, have a unique record key, but they do not have alternate keys. A relative record's key is its ordinal position; for example, the 10th record has a key of 10. This means that creating a record with a key of 5 may require the creation of (empty) preceding records. Relative files also allow for both sequential and random access.",
"title": "Features"
},
{
"paragraph_id": 92,
"text": "A common non-standard extension is the line sequential organization, used to process text files. Records in a file are terminated by a newline and may be of varying length.",
"title": "Features"
},
{
"paragraph_id": 93,
"text": "The data division is split into six sections which declare different items: the file section, for file records; the working-storage section, for static variables; the local-storage section, for automatic variables; the linkage section, for parameters and the return value; the report section and the screen section, for text-based user interfaces.",
"title": "Features"
},
{
"paragraph_id": 94,
"text": "Data items in COBOL are declared hierarchically through the use of level-numbers which indicate if a data item is part of another. An item with a higher level-number is subordinate to an item with a lower one. Top-level data items, with a level-number of 1, are called records. Items that have subordinate aggregate data are called group items; those that do not are called elementary items. Level-numbers used to describe standard data items are between 1 and 49.",
"title": "Features"
},
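A minimal sketch of such a hierarchy, using the names discussed in the next paragraph (the PICTURE clauses are illustrative):

    01  some-record.                   *> record (top-level group item)
        05  num        PIC 9(10).      *> elementary item
        05  the-date.                  *> group item
            10  the-year   PIC 9(4).   *> elementary item
            10  the-month  PIC 99.     *> elementary item
            10  the-day    PIC 99.     *> elementary item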
{
"paragraph_id": 95,
"text": "In the above example, elementary item num and group item the-date are subordinate to the record some-record, while elementary items the-year, the-month, and the-day are part of the group item the-date.",
"title": "Features"
},
{
"paragraph_id": 96,
"text": "Subordinate items can be disambiguated with the IN (or OF) keyword. For example, consider the example code above along with the following example:",
"title": "Features"
},
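A second, similarly shaped record (again with illustrative PICTURE clauses) whose subordinate names clash with those of the-date above:

    01  sale-date.
        05  the-year   PIC 9(4).
        05  the-month  PIC 99.
        05  the-day    PIC 99.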
{
"paragraph_id": 97,
"text": "The names the-year, the-month, and the-day are ambiguous by themselves, since more than one data item is defined with those names. To specify a particular data item, for instance one of the items contained within the sale-date group, the programmer would use the-year IN sale-date (or the equivalent the-year OF sale-date). This syntax is similar to the \"dot notation\" supported by most contemporary languages.",
"title": "Features"
},
{
"paragraph_id": 98,
"text": "A level-number of 66 is used to declare a re-grouping of previously defined items, irrespective of how those items are structured. This data level, also referred to by the associated RENAMES clause, is rarely used and, circa 1988, was usually found in old programs. Its ability to ignore the hierarchical and logical structure data meant its use was not recommended and many installations forbade its use.",
"title": "Features"
},
{
"paragraph_id": 99,
"text": "A 77 level-number indicates the item is stand-alone, and in such situations is equivalent to the level-number 01. For example, the following code declares two 77-level data items, property-name and sales-region, which are non-group data items that are independent of (not subordinate to) any other data items:",
"title": "Features"
},
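A sketch of those two declarations (the PICTURE clauses are illustrative):

    77  property-name  PIC X(80).
    77  sales-region   PIC 9(5).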
{
"paragraph_id": 100,
"text": "An 88 level-number declares a condition name (a so-called 88-level) which is true when its parent data item contains one of the values specified in its VALUE clause. For example, the following code defines two 88-level condition-name items that are true or false depending on the current character data value of the wage-type data item. When the data item contains a value of 'H', the condition-name wage-is-hourly is true, whereas when it contains a value of 'S' or 'Y', the condition-name wage-is-yearly is true. If the data item contains some other value, both of the condition-names are false.",
"title": "Features"
},
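A sketch of those declarations (the parent item's PICTURE clause is illustrative):

    01  wage-type  PIC X.
        88  wage-is-hourly  VALUE 'H'.
        88  wage-is-yearly  VALUE 'S', 'Y'.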
{
"paragraph_id": 101,
"text": "Standard COBOL provides the following data types:",
"title": "Features"
},
{
"paragraph_id": 102,
"text": "Type safety is variable in COBOL. Numeric data is converted between different representations and sizes silently and alphanumeric data can be placed in any data item that can be stored as a string, including numeric and group data. In contrast, object references and pointers may only be assigned from items of the same type and their values may be restricted to a certain type.",
"title": "Features"
},
{
"paragraph_id": 103,
"text": "A PICTURE (or PIC) clause is a string of characters, each of which represents a portion of the data item and what it may contain. Some picture characters specify the type of the item and how many characters or digits it occupies in memory. For example, a 9 indicates a decimal digit, and an S indicates that the item is signed. Other picture characters (called insertion and editing characters) specify how an item should be formatted. For example, a series of + characters define character positions as well as how a leading sign character is to be positioned within the final character data; the rightmost non-numeric character will contain the item's sign, while other character positions corresponding to a + to the left of this position will contain a space. Repeated characters can be specified more concisely by specifying a number in parentheses after a picture character; for example, 9(7) is equivalent to 9999999. Picture specifications containing only digit (9) and sign (S) characters define purely numeric data items, while picture specifications containing alphabetic (A) or alphanumeric (X) characters define alphanumeric data items. The presence of other formatting characters define edited numeric or edited alphanumeric data items.",
"title": "Features"
},
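A few illustrative declarations (all names are hypothetical):

    01  account-no  PIC 9(7).       *> equivalent to PIC 9999999: seven decimal digits
    01  balance     PIC S9(5)V99.   *> signed; V marks an implied decimal point
    01  customer    PIC X(20).      *> twenty alphanumeric characters
    01  amount-edit PIC ++++9.99.   *> edited numeric with a floating leading sign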
{
"paragraph_id": 104,
"text": "The USAGE clause declares the format in which data is stored. Depending on the data type, it can either complement or be used instead of a PICTURE clause. While it can be used to declare pointers and object references, it is mostly geared towards specifying numeric types. These numeric formats are:",
"title": "Features"
},
{
"paragraph_id": 105,
"text": "The report writer is a declarative facility for creating reports. The programmer need only specify the report layout and the data required to produce it, freeing them from having to write code to handle things like page breaks, data formatting, and headings and footings.",
"title": "Features"
},
{
"paragraph_id": 106,
"text": "Reports are associated with report files, which are files which may only be written to through report writer statements.",
"title": "Features"
},
{
"paragraph_id": 107,
"text": "Each report is defined in the report section of the data division. A report is split into report groups which define the report's headings, footings and details. Reports work around hierarchical control breaks. Control breaks occur when a key variable changes it value; for example, when creating a report detailing customers' orders, a control break could occur when the program reaches a different customer's orders. Here is an example report description for a report which gives a salesperson's sales and which warns of any invalid records:",
"title": "Features"
},
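A heavily simplified sketch of what such a report description might look like (all names, column positions and picture strings are illustrative):

    RD  sales-report
        PAGE LIMIT 60 LINES
        FIRST DETAIL 3
        CONTROLS seller-name.

    01  TYPE PAGE HEADING.
        03  COL 1   VALUE 'Sales Report'.
        03  COL 74  VALUE 'Page'.
        03  COL 79  PIC Z9 SOURCE PAGE-COUNTER.

    01  sales-on-day TYPE DETAIL, LINE + 1.
        03  COL 3   VALUE 'Sales on'.
        03  COL 12  PIC 99/99/9999 SOURCE sales-date.
        03  COL 26  PIC $$$$9.99 SOURCE sales-amount.

    01  invalid-sales TYPE DETAIL, LINE + 1.
        03  COL 3   VALUE 'INVALID RECORD:'.
        03  COL 19  PIC X(34) SOURCE sales-record.

    01  TYPE CONTROL HEADING seller-name, LINE + 2.
        03  COL 1   VALUE 'Seller:'.
        03  COL 9   PIC X(30) SOURCE seller-name.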
{
"paragraph_id": 108,
"text": "The above report description describes the following layout:",
"title": "Features"
},
{
"paragraph_id": 109,
"text": "Four statements control the report writer: INITIATE, which prepares the report writer for printing; GENERATE, which prints a report group; SUPPRESS, which suppresses the printing of a report group; and TERMINATE, which terminates report processing. For the above sales report example, the procedure division might look like this:",
"title": "Features"
},
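A sketch of such a procedure division, assuming the report description sketched above and illustrative file and condition names (end-of-file is assumed to be an 88-level condition):

    OPEN INPUT sales, OUTPUT report-out
    INITIATE sales-report
    PERFORM UNTIL end-of-file
        READ sales
            AT END SET end-of-file TO TRUE
            NOT AT END
                IF valid-record
                    GENERATE sales-on-day
                ELSE
                    GENERATE invalid-sales
                END-IF
        END-READ
    END-PERFORM
    TERMINATE sales-report
    CLOSE sales, report-out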
{
"paragraph_id": 110,
"text": "Use of the Report Writer facility tends to vary considerably; some organizations use it extensively and some not at all. In addition, implementations of Report Writer ranged in quality, with those at the lower end sometimes using excessive amounts of memory at runtime.",
"title": "Features"
},
{
"paragraph_id": 111,
"text": "The sections and paragraphs in the procedure division (collectively called procedures) can be used as labels and as simple subroutines. Unlike in other divisions, paragraphs do not need to be in sections.",
"title": "Features"
},
{
"paragraph_id": 112,
"text": "Execution goes down through the procedures of a program until it is terminated. To use procedures as subroutines, the PERFORM verb is used.",
"title": "Features"
},
{
"paragraph_id": 113,
"text": "A PERFORM statement somewhat resembles a procedure call in a newer languages in the sense that execution returns to the code following the PERFORM statement at the end of the called code; however, it does not provide a mechanism for parameter passing or for returning a result value. If a subroutine is invoked using a simple statement like PERFORM subroutine, then control returns at the end of the called procedure. However, PERFORM is unusual in that it may be used to call a range spanning a sequence of several adjacent procedures. This is done with the PERFORM sub-1 THRU sub-n construct:",
"title": "Features"
},
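A sketch of such a program (the paragraph names are illustrative); the first PERFORM calls a single paragraph, while the second calls a range of three:

    PROCEDURE DIVISION.
        PERFORM alpha
        PERFORM alpha THRU gamma
        STOP RUN.
    alpha.
        DISPLAY 'A'.
    beta.
        DISPLAY 'B'.
    gamma.
        DISPLAY 'C'.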
{
"paragraph_id": 114,
"text": "The output of this program will be: \"A A B C\".",
"title": "Features"
},
{
"paragraph_id": 115,
"text": "PERFORM also differs from conventional procedure calls in that there is, at least traditionally, no notion of a call stack. As a consequence, nested invocations are possible (a sequence of code being PERFORM'ed may execute a PERFORM statement itself), but require extra care if parts of the same code are executed by both invocations. The problem arises when the code in the inner invocation reaches the exit point of the outer invocation. More formally, if control passes through the exit point of a PERFORM invocation that was called earlier but has not yet completed, the COBOL 2002 standard stipulates that the behavior is undefined.",
"title": "Features"
},
{
"paragraph_id": 116,
"text": "The reason is that COBOL, rather than a \"return address\", operates with what may be called a continuation address. When control flow reaches the end of any procedure, the continuation address is looked up and control is transferred to that address. Before the program runs, the continuation address for every procedure is initialized to the start address of the procedure that comes next in the program text so that, if no PERFORM statements happen, control flows from top to bottom through the program. But when a PERFORM statement executes, it modifies the continuation address of the called procedure (or the last procedure of the called range, if PERFORM THRU was used), so that control will return to the call site at the end. The original value is saved and is restored afterwards, but there is only one storage position. If two nested invocations operate on overlapping code, they may interfere which each other's management of the continuation address in several ways.",
"title": "Features"
},
{
"paragraph_id": 117,
"text": "The following example (taken from Veerman & Verhoeven 2006) illustrates the problem:",
"title": "Features"
},
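A sketch of the code in question, using the label names referred to in the next paragraph:

    LABEL1.
        DISPLAY '1'
        PERFORM LABEL2 THRU LABEL3
        STOP RUN.
    LABEL2.
        DISPLAY '2'
        PERFORM LABEL3 THRU LABEL4.
    LABEL3.
        DISPLAY '3'.
    LABEL4.
        DISPLAY '4'.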
{
"paragraph_id": 118,
"text": "One might expect that the output of this program would be \"1 2 3 4 3\": After displaying \"2\", the second PERFORM causes \"3\" and \"4\" to be displayed, and then the first invocation continues on with \"3\". In traditional COBOL implementations, this is not the case. Rather, the first PERFORM statement sets the continuation address at the end of LABEL3 so that it will jump back to the call site inside LABEL1. The second PERFORM statement sets the return at the end of LABEL4 but does not modify the continuation address of LABEL3, expecting it to be the default continuation. Thus, when the inner invocation arrives at the end of LABEL3, it jumps back to the outer PERFORM statement, and the program stops having printed just \"1 2 3\". On the other hand, in some COBOL implementations like the open-source TinyCOBOL compiler, the two PERFORM statements do not interfere with each other and the output is indeed \"1 2 3 4 3\". Therefore, the behavior in such cases is not only (perhaps) surprising, it is also not portable.",
"title": "Features"
},
{
"paragraph_id": 119,
"text": "A special consequence of this limitation is that PERFORM cannot be used to write recursive code. Another simple example to illustrate this (slightly simplified from Veerman & Verhoeven 2006):",
"title": "Features"
},
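A sketch of such a program (the declaration of the counter A is assumed):

        MOVE 1 TO A
        PERFORM LABEL
        STOP RUN.
    LABEL.
        DISPLAY A
        IF A < 3
            ADD 1 TO A
            PERFORM LABEL
        END-IF
        DISPLAY 'END'.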
{
"paragraph_id": 120,
"text": "One might expect that the output is \"1 2 3 END END END\", and in fact that is what some COBOL compilers will produce. But other compilers, like IBM COBOL, will produce code that prints \"1 2 3 END END END END ...\" and so on, printing \"END\" over and over in an endless loop. Since there is limited space to store backup continuation addresses, the backups get overwritten in the course of recursive invocations, and all that can be restored is the jump back to DISPLAY 'END'.",
"title": "Features"
},
{
"paragraph_id": 121,
"text": "COBOL 2014 has 47 statements (also called verbs), which can be grouped into the following broad categories: control flow, I/O, data manipulation and the report writer. The report writer statements are covered in the report writer section.",
"title": "Features"
},
{
"paragraph_id": 122,
"text": "COBOL's conditional statements are IF and EVALUATE. EVALUATE is a switch-like statement with the added capability of evaluating multiple values and conditions. This can be used to implement decision tables. For example, the following might be used to control a CNC lathe:",
"title": "Features"
},
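A sketch of such a decision table (the condition names, data items and procedures are illustrative):

    EVALUATE TRUE ALSO desired-speed ALSO current-speed
        WHEN lid-closed ALSO min-speed THRU max-speed ALSO LESS THAN desired-speed
            PERFORM speed-up-machine
        WHEN lid-closed ALSO min-speed THRU max-speed ALSO GREATER THAN desired-speed
            PERFORM slow-down-machine
        WHEN lid-open ALSO ANY ALSO NOT ZERO
            PERFORM emergency-stop
        WHEN OTHER
            CONTINUE
    END-EVALUATE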
{
"paragraph_id": 123,
"text": "The PERFORM statement is used to define loops which are executed until a condition is true (not while true, which is more common in other languages). It is also used to call procedures or ranges of procedures (see the procedures section for more details). CALL and INVOKE call subprograms and methods, respectively. The name of the subprogram/method is contained in a string which may be a literal or a data item. Parameters can be passed by reference, by content (where a copy is passed by reference) or by value (but only if a prototype is available). CANCEL unloads subprograms from memory. GO TO causes the program to jump to a specified procedure.",
"title": "Features"
},
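As an illustration of the loop form described above, a simple counted loop might read (names are illustrative):

    PERFORM VARYING counter FROM 1 BY 1 UNTIL counter > 10
        DISPLAY counter
    END-PERFORM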
{
"paragraph_id": 124,
"text": "The GOBACK statement is a return statement and the STOP statement stops the program. The EXIT statement has six different formats: it can be used as a return statement, a break statement, a continue statement, an end marker or to leave a procedure.",
"title": "Features"
},
{
"paragraph_id": 125,
"text": "Exceptions are raised by a RAISE statement and caught with a handler, or declarative, defined in the DECLARATIVES portion of the procedure division. Declaratives are sections beginning with a USE statement which specify the errors to handle. Exceptions can be names or objects. RESUME is used in a declarative to jump to the statement after the one that raised the exception or to a procedure outside the DECLARATIVES. Unlike other languages, uncaught exceptions may not terminate the program and the program can proceed unaffected.",
"title": "Features"
},
{
"paragraph_id": 126,
"text": "File I/O is handled by the self-describing OPEN, CLOSE, READ, and WRITE statements along with a further three: REWRITE, which updates a record; START, which selects subsequent records to access by finding a record with a certain key; and UNLOCK, which releases a lock on the last record accessed.",
"title": "Features"
},
{
"paragraph_id": 127,
"text": "User interaction is done using ACCEPT and DISPLAY.",
"title": "Features"
},
{
"paragraph_id": 128,
"text": "The following verbs manipulate data:",
"title": "Features"
},
{
"paragraph_id": 129,
"text": "Files and tables are sorted using SORT and the MERGE verb merges and sorts files. The RELEASE verb provides records to sort and RETURN retrieves sorted records in order.",
"title": "Features"
},
{
"paragraph_id": 130,
"text": "Some statements, such as IF and READ, may themselves contain statements. Such statements may be terminated in two ways: by a period (implicit termination), which terminates all unterminated statements contained, or by a scope terminator, which terminates the nearest matching open statement.",
"title": "Features"
},
{
"paragraph_id": 131,
"text": "Nested statements terminated with a period are a common source of bugs. For example, examine the following code:",
"title": "Features"
},
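A sketch of the kind of code in question; the indentation suggests both DISPLAY statements belong to the IF, but the period after the first one ends the statement:

    IF x
        DISPLAY y.
        DISPLAY z.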
{
"paragraph_id": 132,
"text": "Here, the intent is to display y and z if condition x is true. However, z will be displayed whatever the value of x because the IF statement is terminated by an erroneous period after DISPLAY y.",
"title": "Features"
},
{
"paragraph_id": 133,
"text": "Another bug is a result of the dangling else problem, when two IF statements can associate with an ELSE.",
"title": "Features"
},
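A sketch of such a fragment, matching the discussion in the next paragraph; the indentation suggests the ELSE belongs to IF x, but it associates with IF y:

    IF x
        IF y
            DISPLAY a
    ELSE
        DISPLAY b.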
{
"paragraph_id": 134,
"text": "In the above fragment, the ELSE associates with the IF y statement instead of the IF x statement, causing a bug. Prior to the introduction of explicit scope terminators, preventing it would require ELSE NEXT SENTENCE to be placed after the inner IF.",
"title": "Features"
},
{
"paragraph_id": 135,
"text": "The original (1959) COBOL specification supported the infamous ALTER X TO PROCEED TO Y statement, for which many compilers generated self-modifying code. X and Y are procedure labels, and the single GO TO statement in procedure X executed after such an ALTER statement means GO TO Y instead. Many compilers still support it, but it was deemed obsolete in the COBOL 1985 standard and deleted in 2002.",
"title": "Features"
},
{
"paragraph_id": 136,
"text": "The ALTER statement was poorly regarded because it undermined \"locality of context\" and made a program's overall logic difficult to comprehend. As textbook author Daniel D. McCracken wrote in 1976, when \"someone who has never seen the program before must become familiar with it as quickly as possible, sometimes under critical time pressure because the program has failed ... the sight of a GO TO statement in a paragraph by itself, signaling as it does the existence of an unknown number of ALTER statements at unknown locations throughout the program, strikes fear in the heart of the bravest programmer.\"",
"title": "Features"
},
{
"paragraph_id": 137,
"text": "A \"Hello, world\" program in COBOL:",
"title": "Features"
},
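A minimal free-format version might read:

    IDENTIFICATION DIVISION.
    PROGRAM-ID. hello-world.
    PROCEDURE DIVISION.
        DISPLAY 'Hello, world!'
        STOP RUN.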
{
"paragraph_id": 138,
"text": "When the now famous \"Hello, World!\" program example in The C Programming Language was first published in 1978 a similar mainframe COBOL program sample would have been submitted through JCL, very likely using a punch card reader, and 80 column punch cards. The listing below, with an empty DATA DIVISION, was tested using Linux and the System/370 Hercules emulator running MVS 3.8J. The JCL, written in July 2015, is derived from the Hercules tutorials and samples hosted by Jay Moseley. In keeping with COBOL programming of that era, HELLO, WORLD is displayed in all capital letters.",
"title": "Features"
},
{
"paragraph_id": 139,
"text": "After submitting the JCL, the MVS console displayed:",
"title": "Features"
},
{
"paragraph_id": 140,
"text": "Line 10 of the console listing above is highlighted for effect, the highlighting is not part of the actual console output.",
"title": "Features"
},
{
"paragraph_id": 141,
"text": "The associated compiler listing generated over four pages of technical detail and job run information, for the single line of output from the 14 lines of COBOL.",
"title": "Features"
},
{
"paragraph_id": 142,
"text": "In the 1970s, adoption of the structured programming paradigm was becoming increasingly widespread. Edsger Dijkstra, a preeminent computer scientist, wrote a letter to the editor of Communications of the ACM, published in 1975 entitled \"How do we tell truths that might hurt?\", in which he was critical of COBOL and several other contemporary languages; remarking that \"the use of COBOL cripples the mind\".",
"title": "Reception"
},
{
"paragraph_id": 143,
"text": "In a published dissent to Dijkstra's remarks, the computer scientist Howard E. Tompkins claimed that unstructured COBOL tended to be \"written by programmers that have never had the benefit of structured COBOL taught well\", arguing that the issue was primarily one of training.",
"title": "Reception"
},
{
"paragraph_id": 144,
"text": "One cause of spaghetti code was the GO TO statement. Attempts to remove GO TOs from COBOL code, however, resulted in convoluted programs and reduced code quality. GO TOs were largely replaced by the PERFORM statement and procedures, which promoted modular programming and gave easy access to powerful looping facilities. However, PERFORM could be used only with procedures so loop bodies were not located where they were used, making programs harder to understand.",
"title": "Reception"
},
{
"paragraph_id": 145,
"text": "COBOL programs were infamous for being monolithic and lacking modularization. COBOL code could be modularized only through procedures, which were found to be inadequate for large systems. It was impossible to restrict access to data, meaning a procedure could access and modify any data item. Furthermore, there was no way to pass parameters to a procedure, an omission Jean Sammet regarded as the committee's biggest mistake.",
"title": "Reception"
},
{
"paragraph_id": 146,
"text": "Another complication stemmed from the ability to PERFORM THRU a specified sequence of procedures. This meant that control could jump to and return from any procedure, creating convoluted control flow and permitting a programmer to break the single-entry single-exit rule.",
"title": "Reception"
},
{
"paragraph_id": 147,
"text": "This situation improved as COBOL adopted more features. COBOL-74 added subprograms, giving programmers the ability to control the data each part of the program could access. COBOL-85 then added nested subprograms, allowing programmers to hide subprograms. Further control over data and code came in 2002 when object-oriented programming, user-defined functions and user-defined data types were included.",
"title": "Reception"
},
{
"paragraph_id": 148,
"text": "Nevertheless, much important legacy COBOL software uses unstructured code, which has become practically unmaintainable. It can be too risky and costly to modify even a simple section of code, since it may be used from unknown places in unknown ways.",
"title": "Reception"
},
{
"paragraph_id": 149,
"text": "COBOL was intended to be a highly portable, \"common\" language. However, by 2001, around 300 dialects had been created. One source of dialects was the standard itself: the 1974 standard was composed of one mandatory nucleus and eleven functional modules, each containing two or three levels of support. This permitted 104,976 possible variants.",
"title": "Reception"
},
{
"paragraph_id": 150,
"text": "COBOL-85 was not fully compatible with earlier versions, and its development was controversial. Joseph T. Brophy, the CIO of Travelers Insurance, spearheaded an effort to inform COBOL users of the heavy reprogramming costs of implementing the new standard. As a result, the ANSI COBOL Committee received more than 2,200 letters from the public, mostly negative, requiring the committee to make changes. On the other hand, conversion to COBOL-85 was thought to increase productivity in future years, thus justifying the conversion costs.",
"title": "Reception"
},
{
"paragraph_id": 151,
"text": "A weak, verbose, and flabby language used by code grinders to do boring mindless things on dinosaur mainframes. [...] Its very name is seldom uttered without ritual expressions of disgust or horror.",
"title": "Reception"
},
{
"paragraph_id": 152,
"text": "The Jargon File 4.4.8.",
"title": "Reception"
},
{
"paragraph_id": 153,
"text": "COBOL syntax has often been criticized for its verbosity. Proponents say that this was intended to make the code self-documenting, easing program maintenance. COBOL was also intended to be easy for programmers to learn and use, while still being readable to non-technical staff such as managers.",
"title": "Reception"
},
{
"paragraph_id": 154,
"text": "The desire for readability led to the use of English-like syntax and structural elements, such as nouns, verbs, clauses, sentences, sections, and divisions. Yet by 1984, maintainers of COBOL programs were struggling to deal with \"incomprehensible\" code and the main changes in COBOL-85 were there to help ease maintenance.",
"title": "Reception"
},
{
"paragraph_id": 155,
"text": "Jean Sammet, a short-range committee member, noted that \"little attempt was made to cater to the professional programmer, in fact people whose main interest is programming tend to be very unhappy with COBOL\" which she attributed to COBOL's verbose syntax.",
"title": "Reception"
},
{
"paragraph_id": 156,
"text": "The COBOL community has always been isolated from the computer science community. No academic computer scientists participated in the design of COBOL: all of those on the committee came from commerce or government. Computer scientists at the time were more interested in fields like numerical analysis, physics and system programming than the commercial file-processing problems which COBOL development tackled. Jean Sammet attributed COBOL's unpopularity to an initial \"snob reaction\" due to its inelegance, the lack of influential computer scientists participating in the design process and a disdain for business data processing. The COBOL specification used a unique \"notation\", or metalanguage, to define its syntax rather than the new Backus–Naur form which the committee did not know of. This resulted in \"severe\" criticism.",
"title": "Reception"
},
{
"paragraph_id": 157,
"text": "The academic world tends to regard COBOL as verbose, clumsy and inelegant, and tries to ignore it, although there are probably more COBOL programs and programmers in the world than there are for FORTRAN, ALGOL and PL/I combined. For the most part, only schools with an immediate vocational objective provide instruction in COBOL.",
"title": "Reception"
},
{
"paragraph_id": 158,
"text": "Richard Conway and David Gries, 1973",
"title": "Reception"
},
{
"paragraph_id": 159,
"text": "Later, COBOL suffered from a shortage of material covering it; it took until 1963 for introductory books to appear (with Richard D. Irwin publishing a college textbook on COBOL in 1966). By 1985, there were twice as many books on FORTRAN and four times as many on BASIC as on COBOL in the Library of Congress. University professors taught more modern, state-of-the-art languages and techniques instead of COBOL which was said to have a \"trade school\" nature. Donald Nelson, chair of the CODASYL COBOL committee, said in 1984 that \"academics ... hate COBOL\" and that computer science graduates \"had 'hate COBOL' drilled into them\".",
"title": "Reception"
},
{
"paragraph_id": 160,
"text": "By the mid-1980s, there was also significant condescension towards COBOL in the business community from users of other languages, for example FORTRAN or assembler, implying that COBOL could be used only for non-challenging problems.",
"title": "Reception"
},
{
"paragraph_id": 161,
"text": "In 2003, COBOL featured in 80% of information systems curricula in the United States, the same proportion as C++ and Java. Ten years later, a poll by Micro Focus found that 20% of university academics thought COBOL was outdated or dead and that 55% believed their students thought COBOL was outdated or dead. The same poll also found that only 25% of academics had COBOL programming on their curriculum even though 60% thought they should teach it.",
"title": "Reception"
},
{
"paragraph_id": 162,
"text": "Doubts have been raised about the competence of the standards committee. Short-term committee member Howard Bromberg said that there was \"little control\" over the development process and that it was \"plagued by discontinuity of personnel and ... a lack of talent.\" Jean Sammet and Jerome Garfunkel also noted that changes introduced in one revision of the standard would be reverted in the next, due as much to changes in who was in the standard committee as to objective evidence.",
"title": "Reception"
},
{
"paragraph_id": 163,
"text": "COBOL standards have repeatedly suffered from delays: COBOL-85 arrived five years later than hoped, COBOL 2002 was five years late, and COBOL 2014 was six years late. To combat delays, the standard committee allowed the creation of optional addenda which would add features more quickly than by waiting for the next standard revision. However, some committee members raised concerns about incompatibilities between implementations and frequent modifications of the standard.",
"title": "Reception"
},
{
"paragraph_id": 164,
"text": "COBOL's data structures influenced subsequent programming languages. Its record and file structure influenced PL/I and Pascal, and the REDEFINES clause was a predecessor to Pascal's variant records. Explicit file structure definitions preceded the development of database management systems and aggregated data was a significant advance over Fortran's arrays.",
"title": "Reception"
},
{
"paragraph_id": 165,
"text": "PICTURE data declarations were incorporated into PL/I, with minor changes.",
"title": "Reception"
},
{
"paragraph_id": 166,
"text": "COBOL's COPY facility, although considered \"primitive\", influenced the development of include directives.",
"title": "Reception"
},
{
"paragraph_id": 167,
"text": "The focus on portability and standardization meant programs written in COBOL could be portable and facilitated the spread of the language to a wide variety of hardware platforms and operating systems. Additionally, the well-defined division structure restricts the definition of external references to the Environment Division, which simplifies platform changes in particular.",
"title": "Reception"
}
] | COBOL is a compiled English-like computer programming language designed for business use. It is an imperative, procedural and, since 2002, object-oriented language. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. Many large financial institutions were developing new systems in the language as late as 2006, but most programming in COBOL today is purely to maintain existing applications. Programs are being moved to new platforms, rewritten in modern languages or replaced with other software. COBOL was designed in 1959 by CODASYL and was partly based on the programming language FLOW-MATIC designed by Grace Hopper. It was created as part of a U.S. Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Defense Department promptly forced computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has been revised five times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2023. COBOL statements have prose syntax such as MOVE x TO y, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. This contrasts with the succinct and mathematically-inspired syntax of other languages. COBOL code is split into four divisions containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions and just one class. Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text. COBOL has been criticized for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic programs that are hard to comprehend as a whole, despite their local readability. For years, COBOL has been assumed as a programming language for business operations in mainframes, although in recent years, many COBOL operations have been moved to cloud computing. | 2001-10-25T22:07:42Z | 2023-12-23T19:39:09Z | [
"Template:Dfn",
"Template:Harvnb",
"Template:Notelist",
"Template:Cite news",
"Template:Curlie",
"Template:Sfn",
"Template:Refend",
"Template:Webarchive",
"Template:Major programming languages",
"Template:Cite conference",
"Template:Citation needed",
"Template:Pipe",
"Template:Cite book",
"Template:Refbegin",
"Template:Authority control",
"Template:Efn",
"Template:Use American English",
"Template:Underline",
"Template:Missing information",
"Template:Portal",
"Template:Cite encyclopedia",
"Template:Good article",
"Template:Slink",
"Template:Cite magazine",
"Template:Cite report",
"Template:List of IEC standards",
"Template:Infobox programming language",
"Template:IPAc-en",
"Template:Code",
"Template:N/a",
"Template:Quotebox",
"Template:Em",
"Template:Reflist",
"Template:Cite journal",
"Template:Short description",
"Template:Cite press release",
"Template:Blockquote",
"Template:Cite web",
"Template:ISO standards",
"Template:Use dmy dates",
"Template:Sisterlinks"
] | https://en.wikipedia.org/wiki/COBOL |
6,801 | Crew | A crew is a body or a class of people who work at a common activity, generally in a structured or hierarchical organization. A location in which a crew works is called a crewyard or a workyard. The word has nautical resonances: the tasks involved in operating a ship, particularly a sailing ship, providing numerous specialities within a ship's crew, often organised with a chain of command. Traditional nautical usage strongly distinguishes officers from crew, though the two groups combined form the ship's company. Members of a crew are often referred to by the title crewman or crew-member.
Crew also refers to the sport of rowing, where teams row competitively in racing shells. | [
{
"paragraph_id": 0,
"text": "A crew is a body or a class of people who work at a common activity, generally in a structured or hierarchical organization. A location in which a crew works is called a crewyard or a workyard. The word has nautical resonances: the tasks involved in operating a ship, particularly a sailing ship, providing numerous specialities within a ship's crew, often organised with a chain of command. Traditional nautical usage strongly distinguishes officers from crew, though the two groups combined form the ship's company. Members of a crew are often referred to by the title crewman or crew-member.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Crew also refers to the sport of rowing, where teams row competitively in racing shells.",
"title": ""
}
] | A crew is a body or a class of people who work at a common activity, generally in a structured or hierarchical organization. A location in which a crew works is called a crewyard or a workyard. The word has nautical resonances: the tasks involved in operating a ship, particularly a sailing ship, provide numerous specialities within a ship's crew, often organised with a chain of command. Traditional nautical usage strongly distinguishes officers from crew, though the two groups combined form the ship's company. Members of a crew are often referred to by the title crewman or crew-member. Crew also refers to the sport of rowing, where teams row competitively in racing shells. | 2001-10-15T21:06:29Z | 2023-08-13T13:22:40Z | [
"Template:Cite web",
"Template:Short description",
"Template:Hatgrp",
"Template:Reflist",
"Template:Cite book"
] | https://en.wikipedia.org/wiki/Crew |
6,803 | CCD | CCD may refer to: | [
{
"paragraph_id": 0,
"text": "CCD may refer to:",
"title": ""
}
] | CCD may refer to: | 2001-10-16T04:16:40Z | 2023-09-27T07:03:36Z | [
"Template:Wiktionary",
"Template:TOC right",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/CCD |
6,804 | Charge-coupled device | A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.
In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges.
Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required.
In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used.
However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors.
The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices.
In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, "Charge 'Bubble' Devices".
The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s.
The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent (U.S. Patent 4,085,456) on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971.
The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2D 100 × 100 pixel device. Peter Dillon, a scientist at Kodak Research Labs, invented the first color CCD image sensor by overlaying a color filter array on this Fairchild 100 x 100 pixel Interline CCD starting in 1974. Steven Sasson, an electrical engineer working for the Kodak Apparatus Division, invented a digital still camera using this same Fairchild 100 × 100 CCD in 1975.
The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981.
The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array (800 × 800 pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981.
Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name "pinned photodiode" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.
In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for "pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers".
In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking).
An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.
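The shift-register readout described above is easy to model. The following is a minimal Python sketch of that process for an idealized, noise-free sensor; the array size and the conversion gain are illustrative values, not taken from any real device.

```python
import numpy as np

def read_out(pixel_charges, gain_uV_per_e=2.0):
    """Serially shift accumulated charge out of an idealized CCD,
    converting each packet to a voltage at the output amplifier."""
    rows, cols = pixel_charges.shape
    array = pixel_charges.astype(float)          # working copy, in electrons
    voltages = []
    for _ in range(rows):
        serial_register = array[-1].copy()       # bottom row enters the serial register
        array[1:] = array[:-1].copy()            # parallel transfer: all rows shift down
        array[0] = 0.0
        for _ in range(cols):
            charge = serial_register[-1]         # last capacitor dumps into the amplifier
            serial_register[1:] = serial_register[:-1].copy()
            serial_register[0] = 0.0
            voltages.append(charge * gain_uV_per_e)
    return np.array(voltages)

frame = np.random.poisson(500.0, size=(4, 6))    # toy 4x6 exposure, in electrons
print(read_out(frame)[:6])                       # first samples: bottom row, last column first
```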
Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified:
The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In the latter case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10^5 electrons per pixel.
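As a rough illustration of dark-current accumulation and the full-well limit just described, here is a small Python sketch; the rates and the 10^5-electron well depth are stand-in round numbers, not measurements of any particular device.

```python
import numpy as np

FULL_WELL = 1e5            # well depth in electrons per pixel (illustrative)
rng = np.random.default_rng(0)

def integrate(photon_rate, dark_rate, exposure_s):
    """Accumulate photo- and dark-electrons in one pixel over an
    exposure, clipping at the full-well depth once the well fills."""
    photo = rng.poisson(photon_rate * exposure_s)   # signal electrons
    dark = rng.poisson(dark_rate * exposure_s)      # dark-current electrons (noise)
    return min(photo + dark, FULL_WELL)

# Cooling lowers the dark rate, so long exposures stay signal-dominated:
print(integrate(photon_rate=50, dark_rate=200, exposure_s=300))   # warm sensor
print(integrate(photon_rate=50, dark_rate=0.5, exposure_s=300))   # cooled sensor
```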
The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device:
This thin layer (≈ 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD.
The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate.
Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region.
Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or "charge carrying", regions.
Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible).
The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device.
CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices.
Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.
The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering.
In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out.
With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.
The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips, dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and onto the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design.
The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device.
CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light.
Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers.
Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.
The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness.
The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time has passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as "vertical smear" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. A faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level.
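A toy numerical model of this vertical smear, under the common approximation that each packet picks up a fixed fraction of its column's total illumination per readout, might look as follows; the frame and shift-time fraction are illustrative.

```python
import numpy as np

def read_with_smear(frame, t_shift_over_t_exp):
    """Toy model of vertical smear: while charge is clocked through a
    column during readout, light keeps falling, so every pixel picks up
    a small fraction of its whole column's signal."""
    col_leak = frame.sum(axis=0) * t_shift_over_t_exp
    return frame + col_leak[np.newaxis, :]

frame = np.zeros((5, 5))
frame[2, 2] = 1000.0                         # one bright point source
print(read_with_smear(frame, 0.01)[:, 2])    # the streak above and below it
```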
A frame transfer CCD solves both problems: it has a shielded, not light sensitive, area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures.
The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed.
An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD.
An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted closely one behind the other in that sequence. Photons coming from the light source fall onto the photocathode, generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage applied between the photocathode and the MCP. The electrons are multiplied inside the MCP and then accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back into photons, which are guided to the CCD by a fiber optic or a lens.
An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras.
Besides the extremely high sensitivity of ICCD cameras, which enable single photon detection, the gateability is one of the major advantages of the ICCD over the EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds.
ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around 170 K (−103 °C). This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application.
ICCDs are used in night vision devices and in various scientific applications.
An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high (g = (1 + P)^N), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in the U.S. Patent 3,761,744 in 1973 by George E. Smith/Bell Telephone Laboratories.
EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. This effect is referred to as the Excess Noise Factor (ENF). However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation:
where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g. For very large numbers of input electrons, this complex distribution function converges towards a Gaussian.
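A Monte-Carlo sketch of the stochastic multiplication can make these statistics concrete: each of the N stages duplicates an electron with small probability P, giving a mean gain of (1 + P)^N and, for a single input electron at large gain, a near-exponential output distribution (so the squared excess noise factor approaches 2). The stage count and probability below are illustrative, not taken from any datasheet.

```python
import numpy as np

rng = np.random.default_rng(1)
P, N = 0.015, 536                  # per-stage gain probability and stage count (illustrative)

def multiply(m_input):
    """Push m_input electrons through N impact-ionization stages."""
    n = m_input
    for _ in range(N):
        n += rng.binomial(n, P)    # each electron may free one extra electron
    return n

samples = np.array([multiply(1) for _ in range(2000)])
print("theoretical mean gain:", (1 + P) ** N)          # ~2.9e3
print("simulated mean gain:  ", samples.mean())
print("std/mean (near 1 means ENF^2 ~ 2):", samples.std() / samples.mean())
```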
Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras indispensably need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of −65 to −95 °C (−85 to −139 °F). This cooling system adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues.
The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs.
In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.
Due to the high quantum efficiency of charge-coupled devices (the ideal quantum efficiency is 100%, i.e. one generated electron per incident photon), the linearity of their outputs, their ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications.
Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter both closed and open. The average of images taken with the shutter closed is necessary to lower the random noise. Once computed, the average dark frame is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by reading the same collected charge multiple times, and have applications in precision light dark-matter searches and neutrino measurements.
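A minimal sketch of the dark-frame calibration just described, with made-up numbers standing in for real exposures:

```python
import numpy as np

def calibrate(light_frame, dark_frames):
    """Subtract the average of several shutter-closed exposures from an
    open-shutter image, removing dark current and fixed-pattern defects."""
    master_dark = np.mean(dark_frames, axis=0)   # averaging suppresses random noise
    return light_frame - master_dark

rng = np.random.default_rng(2)
pattern = rng.uniform(5, 15, size=(8, 8))            # fixed-pattern dark current (e-)
darks = [rng.poisson(pattern) for _ in range(20)]    # 20 shutter-closed frames
light = rng.poisson(pattern + 100)                   # exposure with ~100 e- of signal
print(calibrate(light, darks).mean())                # ~100 once calibrated
```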
The Hubble Space Telescope, in particular, has a highly developed series of steps (a "data reduction pipeline") to convert the raw CCD data to useful images.
CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them.
An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky. The Gaia space telescope is another instrument operating in this mode, rotating about its axis at a constant rate of 1 revolution in 6 hours and scanning a 360° by 0.5° strip on the sky during this time; a star traverses the entire focal plane in about 40 seconds (effective exposure time).
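The quoted Gaia figures can be sanity-checked with simple arithmetic (a back-of-envelope check, not mission documentation):

```python
# One revolution per 6 hours gives the along-scan rate; a 40 s crossing
# then implies the along-scan extent of the focal plane.
rev_period_s = 6 * 3600
scan_rate_deg_per_s = 360 / rev_period_s          # ~0.0167 deg/s (60 arcsec/s)
print(scan_rate_deg_per_s * 40)                   # ~0.67 deg traversed in 40 s
```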
In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers.
Digital color cameras, including the digital color cameras in smartphones, generally use an integral color image sensor, which has a color filter array (CFA) fabricated on top of the monochrome pixels of the CCD. The most popular CFA pattern is known as the Bayer filter, which is named for its inventor, Kodak scientist Bryce Bayer. In the Bayer pattern, each square of four pixels has one filtered red, one blue, and two green pixels (the human eye has greater acuity for luminance, which is more heavily weighted in green than in either red or blue). As a result, the luminance information is collected in each row and column using a checkerboard pattern, and the color resolution is lower than the luminance resolution.
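The Bayer sampling geometry is easy to write down; this sketch builds boolean masks for an RGGB tiling (one of several possible orientations) to show the green checkerboard described above.

```python
import numpy as np

def bayer_masks(rows, cols):
    """Boolean sampling masks for an RGGB Bayer pattern: each 2x2 tile
    holds one red, two green and one blue filtered pixel."""
    r = np.zeros((rows, cols), bool)
    g = np.zeros((rows, cols), bool)
    b = np.zeros((rows, cols), bool)
    r[0::2, 0::2] = True       # red on even rows / even columns
    g[0::2, 1::2] = True       # green fills the checkerboard diagonal...
    g[1::2, 0::2] = True       # ...so luminance is sampled in every row and column
    b[1::2, 1::2] = True       # blue on odd rows / odd columns
    return r, g, b

r, g, b = bayer_masks(4, 4)
print(r.mean(), g.mean(), b.mean())   # 0.25 0.5 0.25
```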
Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism, that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location.
For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels become equivalent (the resolutions of red and blue channels are quadrupled while the green channel is doubled).
Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″, called the optical format. This designation dates back to the 1950s and the era of Vidicon tubes.
When a CCD exposure is long enough, eventually the electrons that collect in the "bins" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking.
Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity. | [
{
"paragraph_id": 0,
"text": "A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging.",
"title": ""
},
{
"paragraph_id": 1,
"text": "In a CCD image sensor, pixels are represented by p-doped metal–oxide–semiconductor (MOS) capacitors. These MOS capacitors, the basic building blocks of a CCD, are biased above the threshold for inversion when image acquisition begins, allowing the conversion of incoming photons into electron charges at the semiconductor-oxide interface; the CCD is then used to read out these charges.",
"title": "Overview"
},
{
"paragraph_id": 2,
"text": "Although CCDs are not the only technology to allow for light detection, CCD image sensors are widely used in professional, medical, and scientific applications where high-quality image data are required.",
"title": "Overview"
},
{
"paragraph_id": 3,
"text": "In applications with less exacting quality demands, such as consumer and professional digital cameras, active pixel sensors, also known as CMOS sensors (complementary MOS sensors), are generally used.",
"title": "Overview"
},
{
"paragraph_id": 4,
"text": "However, the large quality advantage CCDs enjoyed early on has narrowed over time and since the late 2010s CMOS sensors are the dominant technology, having largely if not completely replaced CCD image sensors.",
"title": "Overview"
},
{
"paragraph_id": 5,
"text": "The basis for the CCD is the metal–oxide–semiconductor (MOS) structure, with MOS capacitors being the basic building blocks of a CCD, and a depleted MOS structure used as the photodetector in early CCD devices.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "In the late 1960s, Willard Boyle and George E. Smith at Bell Labs were researching MOS technology while working on semiconductor bubble memory. They realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. This led to the invention of the charge-coupled device by Boyle and Smith in 1969. They conceived of the design of what they termed, in their notebook, \"Charge 'Bubble' Devices\".",
"title": "History"
},
{
"paragraph_id": 7,
"text": "The initial paper describing the concept in April 1970 listed possible uses as memory, a delay line, and an imaging device. The device could also be used as a shift register. The essence of the design was the ability to transfer charge along the surface of a semiconductor from one storage capacitor to the next. The concept was similar in principle to the bucket-brigade device (BBD), which was developed at Philips Research Labs during the late 1960s.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "The first experimental device demonstrating the principle was a row of closely spaced metal squares on an oxidized silicon surface electrically accessed by wire bonds. It was demonstrated by Gil Amelio, Michael Francis Tompsett and George Smith in April 1970. This was the first experimental application of the CCD in image sensor technology, and used a depleted MOS structure as the photodetector. The first patent (U.S. Patent 4,085,456) on the application of CCDs to imaging was assigned to Tompsett, who filed the application in 1971.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "The first working CCD made with integrated circuit technology was a simple 8-bit shift register, reported by Tompsett, Amelio and Smith in August 1970. This device had input and output circuits and was used to demonstrate its use as a shift register and as a crude eight pixel linear imaging device. Development of the device progressed at a rapid rate. By 1971, Bell researchers led by Michael Tompsett were able to capture images with simple linear devices. Several companies, including Fairchild Semiconductor, RCA and Texas Instruments, picked up on the invention and began development programs. Fairchild's effort, led by ex-Bell researcher Gil Amelio, was the first with commercial devices, and by 1974 had a linear 500-element device and a 2D 100 × 100 pixel device. Peter Dillon, a scientist at Kodak Research Labs, invented the first color CCD image sensor by overlaying a color filter array on this Fairchild 100 x 100 pixel Interline CCD starting in 1974. Steven Sasson, an electrical engineer working for the Kodak Apparatus Division, invented a digital still camera using this same Fairchild 100 × 100 CCD in 1975.",
"title": "History"
},
{
"paragraph_id": 10,
"text": "The interline transfer (ILT) CCD device was proposed by L. Walsh and R. Dyck at Fairchild in 1973 to reduce smear and eliminate a mechanical shutter. To further reduce smear from bright light sources, the frame-interline-transfer (FIT) CCD architecture was developed by K. Horii, T. Kuroda and T. Kunii at Matsushita (now Panasonic) in 1981.",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The first KH-11 KENNEN reconnaissance satellite equipped with charge-coupled device array (800 × 800 pixels) technology for imaging was launched in December 1976. Under the leadership of Kazuo Iwama, Sony started a large development effort on CCDs involving a significant investment. Eventually, Sony managed to mass-produce CCDs for their camcorders. Before this happened, Iwama died in August 1982. Subsequently a CCD chip was placed on his tombstone to acknowledge his contribution. The first mass-produced consumer CCD video camera, the CCD-G5, was released by Sony in 1983, based on a prototype developed by Yoshiaki Hagiwara in 1981.",
"title": "History"
},
{
"paragraph_id": 12,
"text": "Early CCD sensors suffered from shutter lag. This was largely resolved with the invention of the pinned photodiode (PPD). It was invented by Nobukazu Teranishi, Hiromitsu Shiraki and Yasuo Ishihara at NEC in 1980. They recognized that lag can be eliminated if the signal carriers could be transferred from the photodiode to the CCD. This led to their invention of the pinned photodiode, a photodetector structure with low lag, low noise, high quantum efficiency and low dark current. It was first publicly reported by Teranishi and Ishihara with A. Kohono, E. Oda and K. Arai in 1982, with the addition of an anti-blooming structure. The new photodetector structure invented at NEC was given the name \"pinned photodiode\" (PPD) by B.C. Burkey at Kodak in 1984. In 1987, the PPD began to be incorporated into most CCD devices, becoming a fixture in consumer electronic video cameras and then digital still cameras. Since then, the PPD has been used in nearly all CCD sensors and then CMOS sensors.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In January 2006, Boyle and Smith were awarded the National Academy of Engineering Charles Stark Draper Prize, and in 2009 they were awarded the Nobel Prize for Physics for their invention of the CCD concept. Michael Tompsett was awarded the 2010 National Medal of Technology and Innovation, for pioneering work and electronic technologies including the design and development of the first CCD imagers. He was also awarded the 2012 IEEE Edison Medal for \"pioneering contributions to imaging devices including CCD Imagers, cameras and thermal imagers\".",
"title": "History"
},
{
"paragraph_id": 14,
"text": "In a CCD for capturing images, there is a photoactive region (an epitaxial layer of silicon), and a transmission region made out of a shift register (the CCD, properly speaking).",
"title": "Basics of operation"
},
{
"paragraph_id": 15,
"text": "An image is projected through a lens onto the capacitor array (the photoactive region), causing each capacitor to accumulate an electric charge proportional to the light intensity at that location. A one-dimensional array, used in line-scan cameras, captures a single slice of the image, whereas a two-dimensional array, used in video and still cameras, captures a two-dimensional picture corresponding to the scene projected onto the focal plane of the sensor. Once the array has been exposed to the image, a control circuit causes each capacitor to transfer its contents to its neighbor (operating as a shift register). The last capacitor in the array dumps its charge into a charge amplifier, which converts the charge into a voltage. By repeating this process, the controlling circuit converts the entire contents of the array in the semiconductor to a sequence of voltages. In a digital device, these voltages are then sampled, digitized, and usually stored in memory; in an analog device (such as an analog video camera), they are processed into a continuous analog signal (e.g. by feeding the output of the charge amplifier into a low-pass filter), which is then processed and fed out to other circuits for transmission, recording, or other processing.",
"title": "Basics of operation"
},
{
"paragraph_id": 16,
"text": "Before the MOS capacitors are exposed to light, they are biased into the depletion region; in n-channel CCDs, the silicon under the bias gate is slightly p-doped or intrinsic. The gate is then biased at a positive potential, above the threshold for strong inversion, which will eventually result in the creation of an n channel below the gate as in a MOSFET. However, it takes time to reach this thermal equilibrium: up to hours in high-end scientific cameras cooled at low temperature. Initially after biasing, the holes are pushed far into the substrate, and no mobile electrons are at or near the surface; the CCD thus operates in a non-equilibrium state called deep depletion. Then, when electron–hole pairs are generated in the depletion region, they are separated by the electric field, the electrons move toward the surface, and the holes move toward the substrate. Four pair-generation processes can be identified:",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 17,
"text": "The last three processes are known as dark-current generation, and add noise to the image; they can limit the total usable integration time. The accumulation of electrons at or near the surface can proceed either until image integration is over and charge begins to be transferred, or thermal equilibrium is reached. In this case, the well is said to be full. The maximum capacity of each well is known as the well depth, typically about 10 electrons per pixel.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 18,
"text": "The photoactive region of a CCD is, generally, an epitaxial layer of silicon. It is lightly p doped (usually with boron) and is grown upon a substrate material, often p++. In buried-channel devices, the type of design utilized in most modern CCDs, certain areas of the surface of the silicon are ion implanted with phosphorus, giving them an n-doped designation. This region defines the channel in which the photogenerated charge packets will travel. Simon Sze details the advantages of a buried-channel device:",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 19,
"text": "This thin layer (= 0.2–0.3 micron) is fully depleted and the accumulated photogenerated charge is kept away from the surface. This structure has the advantages of higher transfer efficiency and lower dark current, from reduced surface recombination. The penalty is smaller charge capacity, by a factor of 2–3 compared to the surface-channel CCD.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 20,
"text": "The gate oxide, i.e. the capacitor dielectric, is grown on top of the epitaxial layer and substrate.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 21,
"text": "Later in the process, polysilicon gates are deposited by chemical vapor deposition, patterned with photolithography, and etched in such a way that the separately phased gates lie perpendicular to the channels. The channels are further defined by utilization of the LOCOS process to produce the channel stop region.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 22,
"text": "Channel stops are thermally grown oxides that serve to isolate the charge packets in one column from those in another. These channel stops are produced before the polysilicon gates are, as the LOCOS process utilizes a high-temperature step that would destroy the gate material. The channel stops are parallel to, and exclusive of, the channel, or \"charge carrying\", regions.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 23,
"text": "Channel stops often have a p+ doped region underlying them, providing a further barrier to the electrons in the charge packets (this discussion of the physics of CCD devices assumes an electron transfer device, though hole transfer is possible).",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 24,
"text": "The clocking of the gates, alternately high and low, will forward and reverse bias the diode that is provided by the buried channel (n-doped) and the epitaxial layer (p-doped). This will cause the CCD to deplete, near the p–n junction and will collect and move the charge packets beneath the gates—and within the channels—of the device.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 25,
"text": "CCD manufacturing and operation can be optimized for different uses. The above process describes a frame transfer CCD. While CCDs may be manufactured on a heavily doped p++ wafer it is also possible to manufacture a device inside p-wells that have been placed on an n-wafer. This second method, reportedly, reduces smear, dark current, and infrared and red response. This method of manufacture is used in the construction of interline-transfer devices.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 26,
"text": "Another version of CCD is called a peristaltic CCD. In a peristaltic charge-coupled device, the charge-packet transfer operation is analogous to the peristaltic contraction and dilation of the digestive system. The peristaltic CCD has an additional implant that keeps the charge away from the silicon/silicon dioxide interface and generates a large lateral electric field from one gate to the next. This provides an additional driving force to aid in transfer of the charge packets.",
"title": "Detailed physics of operation"
},
{
"paragraph_id": 27,
"text": "The CCD image sensors can be implemented in several different architectures. The most common are full-frame, frame-transfer, and interline. The distinguishing characteristic of each of these architectures is their approach to the problem of shuttering.",
"title": "Architecture"
},
{
"paragraph_id": 28,
"text": "In a full-frame device, all of the image area is active, and there is no electronic shutter. A mechanical shutter must be added to this type of sensor or the image smears as the device is clocked or read out.",
"title": "Architecture"
},
{
"paragraph_id": 29,
"text": "With a frame-transfer CCD, half of the silicon area is covered by an opaque mask (typically aluminum). The image can be quickly transferred from the image area to the opaque area or storage region with acceptable smear of a few percent. That image can then be read out slowly from the storage region while a new image is integrating or exposing in the active area. Frame-transfer devices typically do not require a mechanical shutter and were a common architecture for early solid-state broadcast cameras. The downside to the frame-transfer architecture is that it requires twice the silicon real estate of an equivalent full-frame device; hence, it costs roughly twice as much.",
"title": "Architecture"
},
{
"paragraph_id": 30,
"text": "The interline architecture extends this concept one step further and masks every other column of the image sensor for storage. In this device, only one pixel shift has to occur to transfer from image area to storage area; thus, shutter times can be less than a microsecond and smear is essentially eliminated. The advantage is not free, however, as the imaging area is now covered by opaque strips dropping the fill factor to approximately 50 percent and the effective quantum efficiency by an equivalent amount. Modern designs have addressed this deleterious characteristic by adding microlenses on the surface of the device to direct light away from the opaque regions and on the active area. Microlenses can bring the fill factor back up to 90 percent or more depending on pixel size and the overall system's optical design.",
"title": "Architecture"
},
{
"paragraph_id": 31,
"text": "The choice of architecture comes down to one of utility. If the application cannot tolerate an expensive, failure-prone, power-intensive mechanical shutter, an interline device is the right choice. Consumer snap-shot cameras have used interline devices. On the other hand, for those applications that require the best possible light collection and issues of money, power and time are less important, the full-frame device is the right choice. Astronomers tend to prefer full-frame devices. The frame-transfer falls in between and was a common choice before the fill-factor issue of interline devices was addressed. Today, frame-transfer is usually chosen when an interline architecture is not available, such as in a back-illuminated device.",
"title": "Architecture"
},
{
"paragraph_id": 32,
"text": "CCDs containing grids of pixels are used in digital cameras, optical scanners, and video cameras as light-sensing devices. They commonly respond to 70 percent of the incident light (meaning a quantum efficiency of about 70 percent) making them far more efficient than photographic film, which captures only about 2 percent of the incident light.",
"title": "Architecture"
},
{
"paragraph_id": 33,
"text": "Most common types of CCDs are sensitive to near-infrared light, which allows infrared photography, night-vision devices, and zero lux (or near zero lux) video-recording/photography. For normal silicon-based detectors, the sensitivity is limited to 1.1 μm. One other consequence of their sensitivity to infrared is that infrared from remote controls often appears on CCD-based digital cameras or camcorders if they do not have infrared blockers.",
"title": "Architecture"
},
{
"paragraph_id": 34,
"text": "Cooling reduces the array's dark current, improving the sensitivity of the CCD to low light intensities, even for ultraviolet and visible wavelengths. Professional observatories often cool their detectors with liquid nitrogen to reduce the dark current, and therefore the thermal noise, to negligible levels.",
"title": "Architecture"
},
{
"paragraph_id": 35,
"text": "The frame transfer CCD imager was the first imaging structure proposed for CCD Imaging by Michael Tompsett at Bell Laboratories. A frame transfer CCD is a specialized CCD, often used in astronomy and some professional video cameras, designed for high exposure efficiency and correctness.",
"title": "Architecture"
},
{
"paragraph_id": 36,
"text": "The normal functioning of a CCD, astronomical or otherwise, can be divided into two phases: exposure and readout. During the first phase, the CCD passively collects incoming photons, storing electrons in its cells. After the exposure time is passed, the cells are read out one line at a time. During the readout phase, cells are shifted down the entire area of the CCD. While they are shifted, they continue to collect light. Thus, if the shifting is not fast enough, errors can result from light that falls on a cell holding charge during the transfer. These errors are referred to as \"vertical smear\" and cause a strong light source to create a vertical line above and below its exact location. In addition, the CCD cannot be used to collect light while it is being read out. A faster shifting requires a faster readout, and a faster readout can introduce errors in the cell charge measurement, leading to a higher noise level.",
"title": "Architecture"
},
{
"paragraph_id": 37,
"text": "A frame transfer CCD solves both problems: it has a shielded, not light sensitive, area containing as many cells as the area exposed to light. Typically, this area is covered by a reflective material such as aluminium. When the exposure time is up, the cells are transferred very rapidly to the hidden area. Here, safe from any incoming light, cells can be read out at any speed one deems necessary to correctly measure the cells' charge. At the same time, the exposed part of the CCD is collecting light again, so no delay occurs between successive exposures.",
"title": "Architecture"
},
{
"paragraph_id": 38,
"text": "The disadvantage of such a CCD is the higher cost: the cell area is basically doubled, and more complex control electronics are needed.",
"title": "Architecture"
},
{
"paragraph_id": 39,
"text": "An intensified charge-coupled device (ICCD) is a CCD that is optically connected to an image intensifier that is mounted in front of the CCD.",
"title": "Architecture"
},
{
"paragraph_id": 40,
"text": "An image intensifier includes three functional elements: a photocathode, a micro-channel plate (MCP) and a phosphor screen. These three elements are mounted one close behind the other in the mentioned sequence. The photons which are coming from the light source fall onto the photocathode, thereby generating photoelectrons. The photoelectrons are accelerated towards the MCP by an electrical control voltage, applied between photocathode and MCP. The electrons are multiplied inside of the MCP and thereafter accelerated towards the phosphor screen. The phosphor screen finally converts the multiplied electrons back to photons which are guided to the CCD by a fiber optic or a lens.",
"title": "Architecture"
},
{
"paragraph_id": 41,
"text": "An image intensifier inherently includes a shutter functionality: If the control voltage between the photocathode and the MCP is reversed, the emitted photoelectrons are not accelerated towards the MCP but return to the photocathode. Thus, no electrons are multiplied and emitted by the MCP, no electrons are going to the phosphor screen and no light is emitted from the image intensifier. In this case no light falls onto the CCD, which means that the shutter is closed. The process of reversing the control voltage at the photocathode is called gating and therefore ICCDs are also called gateable CCD cameras.",
"title": "Architecture"
},
{
"paragraph_id": 42,
"text": "Besides the extremely high sensitivity of ICCD cameras, which enable single photon detection, the gateability is one of the major advantages of the ICCD over the EMCCD cameras. The highest performing ICCD cameras enable shutter times as short as 200 picoseconds.",
"title": "Architecture"
},
{
"paragraph_id": 43,
"text": "ICCD cameras are in general somewhat higher in price than EMCCD cameras because they need the expensive image intensifier. On the other hand, EMCCD cameras need a cooling system to cool the EMCCD chip down to temperatures around 170 K (−103 °C). This cooling system adds additional costs to the EMCCD camera and often yields heavy condensation problems in the application.",
"title": "Architecture"
},
{
"paragraph_id": 44,
"text": "ICCDs are used in night vision devices and in various scientific applications.",
"title": "Architecture"
},
{
"paragraph_id": 45,
"text": "An electron-multiplying CCD (EMCCD, also known as an L3Vision CCD, a product commercialized by e2v Ltd., GB, L3CCD or Impactron CCD, a now-discontinued product offered in the past by Texas Instruments) is a charge-coupled device in which a gain register is placed between the shift register and the output amplifier. The gain register is split up into a large number of stages. In each stage, the electrons are multiplied by impact ionization in a similar way to an avalanche diode. The gain probability at every stage of the register is small (P < 2%), but as the number of elements is large (N > 500), the overall gain can be very high ( g = ( 1 + P ) N {\\displaystyle g=(1+P)^{N}} ), with single input electrons giving many thousands of output electrons. Reading a signal from a CCD gives a noise background, typically a few electrons. In an EMCCD, this noise is superimposed on many thousands of electrons rather than a single electron; the devices' primary advantage is thus their negligible readout noise. The use of avalanche breakdown for amplification of photo charges had already been described in the U.S. Patent 3,761,744 in 1973 by George E. Smith/Bell Telephone Laboratories.",
"title": "Architecture"
},
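The multiplication process described above can be illustrated with a short Monte Carlo sketch: each of N stages duplicates every electron independently with probability P, so the mean gain approaches (1 + P)^N. The stage count and probability below are assumed example values, chosen only to be consistent with the ranges quoted in the text (P < 2%, N > 500).

```python
import numpy as np

rng = np.random.default_rng(0)

def amplify(electrons: int, n_stages: int = 536, p: float = 0.015) -> int:
    """Pass a packet of electrons through the multiplication register."""
    for _ in range(n_stages):
        # Each electron independently spawns one extra electron with
        # probability p (impact ionization), so the packet grows by a
        # binomially distributed number of electrons at every stage.
        electrons += rng.binomial(electrons, p)
    return electrons

print("analytic mean gain (1 + P)^N ≈", round((1 + 0.015) ** 536))
samples = [amplify(1) for _ in range(1000)]
print("simulated mean gain          ≈", round(float(np.mean(samples))))
```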
{
"paragraph_id": 46,
"text": "EMCCDs show a similar sensitivity to intensified CCDs (ICCDs). However, as with ICCDs, the gain that is applied in the gain register is stochastic and the exact gain that has been applied to a pixel's charge is impossible to know. At high gains (> 30), this uncertainty has the same effect on the signal-to-noise ratio (SNR) as halving the quantum efficiency (QE) with respect to operation with a gain of unity. This effect is referred to as the Excess Noise Factor (ENF). However, at very low light levels (where the quantum efficiency is most important), it can be assumed that a pixel either contains an electron—or not. This removes the noise associated with the stochastic multiplication at the risk of counting multiple electrons in the same pixel as a single electron. To avoid multiple counts in one pixel due to coincident photons in this mode of operation, high frame rates are essential. The dispersion in the gain is shown in the graph on the right. For multiplication registers with many elements and large gains it is well modelled by the equation:",
"title": "Architecture"
},
{
"paragraph_id": 47,
"text": "where P is the probability of getting n output electrons given m input electrons and a total mean multiplication register gain of g. For very large numbers of input electrons, this complex distribution function converges towards a Gaussian.",
"title": "Architecture"
},
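The equation referenced above did not survive extraction here. A form commonly used in the EMCCD literature, and consistent with the variable definitions in the preceding paragraph, is P(n | m, g) = n^(m-1) e^(-n/g) / (g^m (m-1)!); treat this as an assumed reconstruction rather than a verbatim one. The sketch below evaluates it and illustrates the single-photon thresholding strategy that avoids the excess noise factor.

```python
import math

def p_output(n: float, m: int, g: float) -> float:
    """Assumed Erlang-type model for the density of n output electrons given
    m input electrons and mean register gain g (see the note above)."""
    return n ** (m - 1) * math.exp(-n / g) / (g ** m * math.factorial(m - 1))

g = 1000.0
# For a single input electron the output is roughly exponential, so a modest
# threshold separates "no photon" (read noise only) from "photon" events.
threshold = 0.2 * g
detected = sum(p_output(n, m=1, g=g) for n in range(1, 20_000) if n >= threshold)
print(f"fraction of single-electron events above threshold ≈ {detected:.2f}")
```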
{
"paragraph_id": 48,
"text": "Because of the lower costs and better resolution, EMCCDs are capable of replacing ICCDs in many applications. ICCDs still have the advantage that they can be gated very fast and thus are useful in applications like range-gated imaging. EMCCD cameras indispensably need a cooling system—using either thermoelectric cooling or liquid nitrogen—to cool the chip down to temperatures in the range of −65 to −95 °C (−85 to −139 °F). This cooling system adds additional costs to the EMCCD imaging system and may yield condensation problems in the application. However, high-end EMCCD cameras are equipped with a permanent hermetic vacuum system confining the chip to avoid condensation issues.",
"title": "Architecture"
},
{
"paragraph_id": 49,
"text": "The low-light capabilities of EMCCDs find use in astronomy and biomedical research, among other fields. In particular, their low noise at high readout speeds makes them very useful for a variety of astronomical applications involving low light sources and transient events such as lucky imaging of faint stars, high speed photon counting photometry, Fabry-Pérot spectroscopy and high-resolution spectroscopy. More recently, these types of CCDs have broken into the field of biomedical research in low-light applications including small animal imaging, single-molecule imaging, Raman spectroscopy, super resolution microscopy as well as a wide variety of modern fluorescence microscopy techniques thanks to greater SNR in low-light conditions in comparison with traditional CCDs and ICCDs.",
"title": "Architecture"
},
{
"paragraph_id": 50,
"text": "In terms of noise, commercial EMCCD cameras typically have clock-induced charge (CIC) and dark current (dependent on the extent of cooling) that together lead to an effective readout noise ranging from 0.01 to 1 electrons per pixel read. However, recent improvements in EMCCD technology have led to a new generation of cameras capable of producing significantly less CIC, higher charge transfer efficiency and an EM gain 5 times higher than what was previously available. These advances in low-light detection lead to an effective total background noise of 0.001 electrons per pixel read, a noise floor unmatched by any other low-light imaging device.",
"title": "Architecture"
},
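As a back-of-the-envelope illustration of the figures quoted above, the sketch below combines an assumed clock-induced-charge level and an assumed dark current into an effective background per pixel read; both numbers are hypothetical examples, not measurements of any particular camera.

```python
# Hypothetical EMCCD background budget (illustrative values only).
cic_per_read = 0.02   # clock-induced charge, electrons per pixel per read (assumed)
dark_current = 0.001  # electrons per pixel per second at deep cooling (assumed)
exposure_s = 1.0      # one-second frame (assumed)

background = cic_per_read + dark_current * exposure_s
print(f"effective background ≈ {background:.3f} e-/pixel/read")  # within the 0.01-1 range quoted above
```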
{
"paragraph_id": 51,
"text": "Due to the high quantum efficiencies of charge-coupled device (CCD) (the ideal quantum efficiency is 100%, one generated electron per incident photon), linearity of their outputs, ease of use compared to photographic plates, and a variety of other reasons, CCDs were very rapidly adopted by astronomers for nearly all UV-to-infrared applications.",
"title": "Use in astronomy"
},
{
"paragraph_id": 52,
"text": "Thermal noise and cosmic rays may alter the pixels in the CCD array. To counter such effects, astronomers take several exposures with the CCD shutter closed and opened. The average of images taken with the shutter closed is necessary to lower the random noise. Once developed, the dark frame average image is then subtracted from the open-shutter image to remove the dark current and other systematic defects (dead pixels, hot pixels, etc.) in the CCD. Newer Skipper CCDs counter noise by collecting data with the same collected charge multiple times and has applications in precision light Dark Matter searches and neutrino measurements.",
"title": "Use in astronomy"
},
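The dark-frame procedure described above is straightforward to express with NumPy. The following is a minimal sketch in which synthetic arrays stand in for real exposures; actual pipelines also handle bias frames, flat fields, and cosmic-ray rejection.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for real data: closed-shutter (dark) exposures and one open-shutter
# science exposure sharing the same dark-current pattern and a hot pixel.
dark_pattern = rng.poisson(5.0, size=(64, 64)).astype(float)
dark_pattern[10, 20] = 400.0                                   # a hot pixel
darks = [dark_pattern + rng.normal(0, 2, (64, 64)) for _ in range(16)]
science = dark_pattern + rng.normal(0, 2, (64, 64))
science[30:34, 30:34] += 250.0                                 # the astronomical source

master_dark = np.mean(darks, axis=0)  # averaging lowers the random noise
calibrated = science - master_dark    # removes dark current and hot pixels

print("hot pixel before/after:", round(float(science[10, 20])), round(float(calibrated[10, 20])))
print("source mean after calibration:", round(float(calibrated[30:34, 30:34].mean())))
```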
{
"paragraph_id": 53,
"text": "The Hubble Space Telescope, in particular, has a highly developed series of steps (“data reduction pipeline”) to convert the raw CCD data to useful images.",
"title": "Use in astronomy"
},
{
"paragraph_id": 54,
"text": "CCD cameras used in astrophotography often require sturdy mounts to cope with vibrations from wind and other sources, along with the tremendous weight of most imaging platforms. To take long exposures of galaxies and nebulae, many astronomers use a technique known as auto-guiding. Most autoguiders use a second CCD chip to monitor deviations during imaging. This chip can rapidly detect errors in tracking and command the mount motors to correct for them.",
"title": "Use in astronomy"
},
{
"paragraph_id": 55,
"text": "An unusual astronomical application of CCDs, called drift-scanning, uses a CCD to make a fixed telescope behave like a tracking telescope and follow the motion of the sky. The charges in the CCD are transferred and read in a direction parallel to the motion of the sky, and at the same speed. In this way, the telescope can image a larger region of the sky than its normal field of view. The Sloan Digital Sky Survey is the most famous example of this, using the technique to produce a survey of over a quarter of the sky. The Gaia space telescope is another instrument operating in this mode, rotating about its axis at a constant rate of 1 revolution in 6 hours and scanning a 360° by 0.5° strip on the sky during this time; a star traverses the entire focal plane in about 40 seconds (effective exposure time).",
"title": "Use in astronomy"
},
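The Gaia figures quoted above are mutually consistent, as a few lines of arithmetic show; the focal-plane extent below is derived from the article's own numbers rather than taken from an instrument specification.

```python
# Drift scanning: charge is clocked across the CCD at the sky's apparent rate.
rotation_period_s = 6 * 3600                   # one revolution in 6 hours
scan_rate_deg_per_s = 360 / rotation_period_s  # = 1/60 degree per second
crossing_time_s = 40                           # effective exposure quoted above

focal_plane_extent_deg = scan_rate_deg_per_s * crossing_time_s
print(f"scan rate = {scan_rate_deg_per_s * 3600:.0f} deg/hour")
print(f"implied along-scan focal-plane extent ≈ {focal_plane_extent_deg:.2f} deg")
```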
{
"paragraph_id": 56,
"text": "In addition to imagers, CCDs are also used in an array of analytical instrumentation including spectrometers and interferometers.",
"title": "Use in astronomy"
},
{
"paragraph_id": 57,
"text": "Digital color cameras, including the digital color cameras in smartphones, generally use a integral color image sensor, which has a color filter array fabricated on top of the monochrome pixels of the CCD. The most popular CFA pattern is known as the Bayer filter, which is named for its inventor, Kodak scientist Bryce Bayer. In the Bayer pattern, each square of four pixels has one filtered red, one blue, and two green pixels (the human eye has greater acuity for luminance, which is more heavily weighted in green than in either red or blue). As a result, the luminance information is collected in each row and column using a checkerboard pattern, and the color resolution is lower than the luminance resolution.",
"title": "Color cameras"
},
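The Bayer layout described above (one red, two green and one blue filter per 2×2 cell) can be made concrete with a short sketch that samples an RGB image through an RGGB mosaic; real cameras then demosaic the raw frame with considerably more sophisticated interpolation than anything shown here.

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Sample an RGB image through an RGGB Bayer color filter array:
    each 2x2 cell keeps one red, two green and one blue value."""
    h, w, _ = rgb.shape
    raw = np.empty((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return raw

rgb = np.random.default_rng(0).random((4, 4, 3))
raw = bayer_mosaic(rgb)
# Green is sampled on a checkerboard (half of all pixels), red and blue on a
# quarter each -- hence the lower color resolution noted above.
print(raw.shape, "-", raw.size // 2, "of", raw.size, "samples are green")
```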
{
"paragraph_id": 58,
"text": "Better color separation can be reached by three-CCD devices (3CCD) and a dichroic beam splitter prism, that splits the image into red, green and blue components. Each of the three CCDs is arranged to respond to a particular color. Many professional video camcorders, and some semi-professional camcorders, use this technique, although developments in competing CMOS technology have made CMOS sensors, both with beam-splitters and Bayer filters, increasingly popular in high-end video and digital cinema cameras. Another advantage of 3CCD over a Bayer mask device is higher quantum efficiency (higher light sensitivity), because most of the light from the lens enters one of the silicon sensors, while a Bayer mask absorbs a high proportion (more than 2/3) of the light falling on each pixel location.",
"title": "Color cameras"
},
{
"paragraph_id": 59,
"text": "For still scenes, for instance in microscopy, the resolution of a Bayer mask device can be enhanced by microscanning technology. During the process of color co-site sampling, several frames of the scene are produced. Between acquisitions, the sensor is moved in pixel dimensions, so that each point in the visual field is acquired consecutively by elements of the mask that are sensitive to the red, green, and blue components of its color. Eventually every pixel in the image has been scanned at least once in each color and the resolution of the three channels become equivalent (the resolutions of red and blue channels are quadrupled while the green channel is doubled).",
"title": "Color cameras"
},
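Color co-site sampling amounts to shifting the sensor through the four positions of the 2×2 Bayer cell and accumulating whichever channel each pixel sees at each position, so every pixel is eventually measured through red once, green twice, and blue once. The sketch below demonstrates this on a noise-free synthetic scene, where the reconstruction is then exact.

```python
import numpy as np

# RGGB cell: channel index (0=R, 1=G, 2=B) over pixel (y, x) for a given shift.
CELL = np.array([[0, 1],   # R G
                 [1, 2]])  # G B

h, w = 4, 4
scene = np.random.default_rng(0).random((h, w, 3))
accum = np.zeros((h, w, 3))
counts = np.zeros((h, w, 3))

# Four acquisitions, moving the sensor by one pixel between frames, so every
# scene point is seen through red, green and blue filter elements in turn.
for dy in (0, 1):
    for dx in (0, 1):
        for y in range(h):
            for x in range(w):
                c = CELL[(y + dy) % 2, (x + dx) % 2]
                accum[y, x, c] += scene[y, x, c]
                counts[y, x, c] += 1

full_rgb = accum / counts  # R and B seen once, G twice, at every pixel
print("max reconstruction error:", float(np.abs(full_rgb - scene).max()))
```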
{
"paragraph_id": 60,
"text": "Sensors (CCD / CMOS) come in various sizes, or image sensor formats. These sizes are often referred to with an inch fraction designation such as 1/1.8″ or 2/3″ called the optical format. This measurement originates back in the 1950s and the time of Vidicon tubes.",
"title": "Color cameras"
},
{
"paragraph_id": 61,
"text": "When a CCD exposure is long enough, eventually the electrons that collect in the \"bins\" in the brightest part of the image will overflow the bin, resulting in blooming. The structure of the CCD allows the electrons to flow more easily in one direction than another, resulting in vertical streaking.",
"title": "Blooming"
},
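A toy simulation of the overflow mechanism described above: each well has a full-well capacity, and excess charge spills only into its vertical neighbours, which is what turns a bright point source into a vertical streak. The capacity, flux values, and fifty-fifty spill rule are illustrative assumptions.

```python
import numpy as np

FULL_WELL = 1000  # assumed full-well capacity, electrons

def expose_with_blooming(flux: np.ndarray, n_steps: int = 50) -> np.ndarray:
    """Integrate charge; overflow spills into vertical neighbours only."""
    wells = np.zeros_like(flux, dtype=float)
    for _ in range(n_steps):
        wells += flux
        excess = np.clip(wells - FULL_WELL, 0.0, None)
        wells -= excess
        # Charge flows more easily along columns: split the excess up and down.
        wells[:-1, :] += 0.5 * excess[1:, :]
        wells[1:, :] += 0.5 * excess[:-1, :]
    return np.minimum(wells, FULL_WELL)

flux = np.full((7, 7), 1.0)
flux[3, 3] = 100.0                             # a very bright star
print(expose_with_blooming(flux).astype(int))  # a saturated column, not a round spot
```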
{
"paragraph_id": 62,
"text": "Some anti-blooming features that can be built into a CCD reduce its sensitivity to light by using some of the pixel area for a drain structure. James M. Early developed a vertical anti-blooming drain that would not detract from the light collection area, and so did not reduce light sensitivity.",
"title": "Blooming"
}
] | A charge-coupled device (CCD) is an integrated circuit containing an array of linked, or coupled, capacitors. Under the control of an external circuit, each capacitor can transfer its electric charge to a neighboring capacitor. CCD sensors are a major technology used in digital imaging. | 2001-10-16T04:17:38Z | 2023-12-24T05:36:04Z | [
"Template:Short description",
"Template:Nowrap",
"Template:Cn",
"Template:US patent",
"Template:Photography",
"Template:Authority control",
"Template:Reflist",
"Template:Cite news",
"Template:Commons category",
"Template:Portal bar",
"Template:Main",
"Template:Cmn",
"Template:Cite book",
"Template:Cite journal",
"Template:Cite web",
"Template:Webarchive",
"Template:US Patent",
"Template:Convert"
] | https://en.wikipedia.org/wiki/Charge-coupled_device |
6,806 | Computer memory | Computer memory stores information, such as data and programs for immediate use in the computer. The term memory is often synonymous with the term primary storage or main memory. An archaic synonym for memory is store.
Computer memory operates at a high speed compared to storage, which is slower but less expensive per bit and higher in capacity. Besides storing opened programs, computer memory serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory.
Modern computer memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage, and static random-access memory (SRAM) used for CPU cache.
Most semiconductor memory is organized into memory cells, each storing one bit (0 or 1). Flash memory organization includes both one bit per memory cell and multi-level cells capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory.
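The relation between address width and capacity stated above is a simple power of two; the sketch below evaluates it for a few arbitrary example configurations.

```python
def capacity_bits(address_bits: int, word_length: int) -> int:
    """Total capacity of a memory with 2**address_bits words."""
    return (2 ** address_bits) * word_length  # N address bits select one of 2^N words

for n, w in [(10, 8), (16, 16), (32, 64)]:  # example configurations only
    print(f"{n}-bit address, {w}-bit words: {2 ** n} words = {capacity_bits(n, w) // 8} bytes")
```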
In the early 1940s, memory technology often permitted a capacity of a few bytes. The first electronic programmable digital computer, the ENIAC, using thousands of vacuum tubes, could perform simple calculations involving 20 numbers of ten decimal digits stored in the vacuum tubes.
The next significant advance in computer memory came with acoustic delay-line memory, developed by J. Presper Eckert in the early 1940s. Through the construction of a glass tube filled with mercury and plugged at each end with a quartz crystal, delay lines could store bits of information in the form of sound waves propagating through the mercury, with the quartz crystals acting as transducers to read and write bits. Delay-line memory was limited to a capacity of up to a few thousand bits.
Two alternatives to the delay line, the Williams tube and Selectron tube, originated in 1946, both using electron beams in glass tubes as means of storage. Using cathode-ray tubes, Fred Williams invented the Williams tube, which was the first random-access computer memory. The Williams tube was able to store more information than the Selectron tube (the Selectron was limited to 256 bits, while the Williams tube could store thousands) and was less expensive. The Williams tube was nevertheless frustratingly sensitive to environmental disturbances.
Efforts began in the late 1940s to find non-volatile memory. Magnetic-core memory allowed for recall of memory after power loss. It was developed by Frederick W. Viehe and An Wang in the late 1940s, and improved by Jay Forrester and Jan A. Rajchman in the early 1950s, before being commercialized with the Whirlwind computer in 1953. Magnetic-core memory was the dominant form of memory until the development of MOS semiconductor memory in the 1960s.
The first semiconductor memory was implemented as a flip-flop circuit in the early 1960s using bipolar transistors. Semiconductor memory made from discrete devices was first shipped by Texas Instruments to the United States Air Force in 1961. The same year, the concept of solid-state memory on an integrated circuit (IC) chip was proposed by applications engineer Bob Norman at Fairchild Semiconductor. The first bipolar semiconductor memory IC chip was the SP95 introduced by IBM in 1965. While semiconductor memory offered improved performance over magnetic-core memory, it remained larger and more expensive and did not displace magnetic-core memory until the late 1960s.
The invention of the metal–oxide–semiconductor field-effect transistor (MOSFET) enabled the practical use of metal–oxide–semiconductor (MOS) transistors as memory cell storage elements. MOS memory was developed by John Schmidt at Fairchild Semiconductor in 1964. In addition to higher performance, MOS semiconductor memory was cheaper and consumed less power than magnetic core memory. In 1965, J. Wood and R. Ball of the Royal Radar Establishment proposed digital storage systems that use CMOS (complementary MOS) memory cells, in addition to MOSFET power devices for the power supply, switched cross-coupling, switches and delay-line storage. The development of silicon-gate MOS integrated circuit (MOS IC) technology by Federico Faggin at Fairchild in 1968 enabled the production of MOS memory chips. NMOS memory was commercialized by IBM in the early 1970s. MOS memory overtook magnetic core memory as the dominant memory technology in the early 1970s.
The two main types of volatile random-access memory (RAM) are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Bipolar SRAM was invented by Robert Norman at Fairchild Semiconductor in 1963, followed by the development of MOS SRAM by John Schmidt at Fairchild in 1964. SRAM became an alternative to magnetic-core memory, but requires six transistors for each bit of data. Commercial use of SRAM began in 1965, when IBM introduced their SP95 SRAM chip for the System/360 Model 95.
Toshiba introduced bipolar DRAM memory cells for its Toscal BC-1411 electronic calculator in 1965. While it offered improved performance, bipolar DRAM could not compete with the lower price of the then dominant magnetic-core memory. MOS technology is the basis for modern DRAM. In 1966, Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. While examining the characteristics of MOS technology, he found it was possible to build capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent for a single-transistor DRAM memory cell based on MOS technology. This led to the first commercial DRAM IC chip, the Intel 1103 in October 1970. Synchronous dynamic random-access memory (SDRAM) later debuted with the Samsung KM48SL2000 chip in 1992.
The term memory is also often used to refer to non-volatile memory, ranging from read-only memory (ROM) to modern flash memory. Programmable read-only memory (PROM) was invented by Wen Tsing Chow in 1956, while working for the Arma Division of the American Bosch Arma Corporation. In 1967, Dawon Kahng and Simon Sze of Bell Labs proposed that the floating gate of a MOS semiconductor device could be used for the cell of a reprogrammable ROM, which led to Dov Frohman of Intel inventing EPROM (erasable PROM) in 1971. EEPROM (electrically erasable PROM) was developed by Yasuo Tarui, Yutaka Hayashi and Kiyoko Naga at the Electrotechnical Laboratory in 1972. Flash memory was invented by Fujio Masuoka at Toshiba in the early 1980s. Masuoka and colleagues presented the invention of NOR flash in 1984, and then NAND flash in 1987. Toshiba commercialized NAND flash memory in 1987.
Developments in technology and economies of scale have made possible so-called very large memory (VLM) computers.
Volatile memory is computer memory that requires power to maintain the stored information. Most modern semiconductor volatile memory is either static RAM (SRAM) or dynamic RAM (DRAM). DRAM dominates for desktop system memory. SRAM is used for CPU cache. SRAM is also found in small embedded systems requiring little memory.
SRAM retains its contents as long as the power is connected and may use a simpler interface, but requires six transistors per bit. Dynamic RAM is more complicated for interfacing and control, needing regular refresh cycles to prevent losing its contents, but uses only one transistor and one capacitor per bit, allowing it to reach much higher densities and much lower per-bit costs.
Non-volatile memory can retain the stored information even when not powered. Examples of non-volatile memory include read-only memory, flash memory, most types of magnetic computer storage devices (e.g. hard disk drives, floppy disks and magnetic tape), optical discs, and early computer storage methods such as paper tape and punched cards.
Non-volatile memory technologies under development include ferroelectric RAM, programmable metallization cell, Spin-transfer torque magnetic RAM, SONOS, resistive random-access memory, racetrack memory, Nano-RAM, 3D XPoint, and millipede memory.
A third category of memory is semi-volatile. The term is used to describe a memory that has some limited non-volatile duration after power is removed, but then data is ultimately lost. A typical goal when using a semi-volatile memory is to provide the high performance and durability associated with volatile memories while providing some benefits of non-volatile memory.
For example, some non-volatile memory types experience wear when written. A worn cell has increased volatility but otherwise continues to work. Data locations which are written frequently can thus be directed to use worn circuits. As long as the location is updated within some known retention time, the data stays valid. After a period of time without update, the value is copied to a less-worn circuit with longer retention. Writing first to the worn area allows a high write rate while avoiding wear on the not-worn circuits.
As a second example, an STT-RAM can be made non-volatile by building large cells, but doing so raises the cost per bit and power requirements and reduces the write speed. Using small cells improves cost, power, and speed, but leads to semi-volatile behavior. In some applications, the increased volatility can be managed to provide many benefits of a non-volatile memory, for example by removing power but forcing a wake-up before data is lost; or by caching read-only data and discarding the cached data if the power-off time exceeds the non-volatile threshold.
The term semi-volatile is also used to describe semi-volatile behavior constructed from other memory types. For example, a volatile and a non-volatile memory may be combined, where an external signal copies data from the volatile memory to the non-volatile memory, but if power is removed before the copy occurs, the data is lost. Alternatively, a volatile memory may be battery-backed: if external power is lost, there is some known period during which the battery can continue to power the volatile memory, but if power is off for an extended time, the battery runs down and the data is lost.
Proper memory management is vital for a computer system to operate correctly. Modern operating systems have complex systems to manage memory properly. Failure to do so can lead to bugs or slow performance.
Improper management of memory is a common cause of bugs, including the following types:
Virtual memory is a system where physical memory is managed by the operating system typically with assistance from a memory management unit, which is part of many modern CPUs. It allows multiple types of memory to be used. For example, some data can be stored in RAM while other data is stored on a hard drive (e.g. in a swapfile), functioning as an extension of the cache hierarchy. This offers several advantages. Computer programmers no longer need to worry about where their data is physically stored or whether the user's computer will have enough memory. The operating system will place actively used data in RAM, which is much faster than hard disks. When the amount of RAM is not sufficient to run all the current programs, it can result in a situation where the computer spends more time moving data from RAM to disk and back than it does accomplishing tasks; this is known as thrashing.
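Why an undersized RAM leads to thrashing can be seen with a minimal paging simulation: pages are accessed through a fixed number of RAM frames with least-recently-used replacement, and every fault stands in for a slow transfer between disk and RAM. The access pattern and sizes are arbitrary illustrations.

```python
from collections import OrderedDict

def page_faults(accesses, n_frames: int) -> int:
    """Count page faults under least-recently-used (LRU) replacement."""
    frames = OrderedDict()  # resident pages, kept in least-recently-used order
    faults = 0
    for page in accesses:
        if page in frames:
            frames.move_to_end(page)        # mark as recently used
        else:
            faults += 1                     # would trigger a disk transfer
            if len(frames) >= n_frames:
                frames.popitem(last=False)  # evict the least recently used page
            frames[page] = None
    return faults

accesses = list(range(9)) * 100  # a program cycling through a 9-page working set
print("12 frames:", page_faults(accesses, 12), "faults")  # fits in RAM: first-touch only
print(" 8 frames:", page_faults(accesses, 8), "faults")   # slightly too small: thrashing
```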
Protected memory is a system where each program is given an area of memory to use and is prevented from going outside that range. If the operating system detects that a program has tried to alter memory that does not belong to it, the program is terminated (or otherwise restricted or redirected). This way, only the offending program crashes, and other programs are not affected by the misbehavior (whether accidental or intentional). Use of protected memory greatly enhances both the reliability and security of a computer system.
Without protected memory, it is possible that a bug in one program will alter the memory used by another program. This will cause that other program to operate on corrupted memory with unpredictable results. If the operating system's memory is corrupted, the entire computer system may crash and need to be rebooted. At times programs intentionally alter the memory used by other programs. This is done by viruses and malware to take over computers. It may also be used benignly by desirable programs intended to modify other programs, such as debuggers inserting breakpoints or hooks. | [
] | Computer memory stores information, such as data and programs for immediate use in the computer. The term memory is often synonymous with the term primary storage or main memory. An archaic synonym for memory is store. Computer memory operates at a high speed compared to storage, which is slower but less expensive per bit and higher in capacity. Besides storing opened programs, computer memory serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. If needed, contents of the computer memory can be transferred to storage; a common way of doing this is through a memory management technique called virtual memory. Modern computer memory is implemented as semiconductor memory, where data is stored within memory cells built from MOS transistors and other components on an integrated circuit. There are two main kinds of semiconductor memory: volatile and non-volatile. Examples of non-volatile memory are flash memory and ROM, PROM, EPROM and EEPROM memory. Examples of volatile memory are dynamic random-access memory (DRAM) used for primary storage, and static random-access memory (SRAM) used for CPU cache. Most semiconductor memory is organized into memory cells, each storing one bit. Flash memory organization includes both one bit per memory cell and multi-level cells capable of storing multiple bits per cell. The memory cells are grouped into words of fixed word length, for example, 1, 2, 4, 8, 16, 32, 64 or 128 bits. Each word can be accessed by a binary address of N bits, making it possible to store 2^N words in the memory. | 2001-10-16T06:37:52Z | 2023-12-08T10:47:24Z | [
"Template:Cite book",
"Template:Basic computer components",
"Template:Short description",
"Template:Main",
"Template:Vanchor",
"Template:Efn",
"Template:Notelist",
"Template:Cite web",
"Template:Benchmark",
"Template:Memory types",
"Template:Circa",
"Template:Nbsp",
"Template:Cite news",
"Template:Cite conference",
"Template:Citation",
"Template:As of",
"Template:Reflist",
"Template:Webarchive",
"Template:Patent",
"Template:Use American English",
"Template:Commons category",
"Template:Cite journal",
"Template:Authority control"
] | https://en.wikipedia.org/wiki/Computer_memory |
6,809 | CDC (disambiguation) | The Centers for Disease Control and Prevention is the national public health agency of the United States.
CDC may also refer to: | [
] | The Centers for Disease Control and Prevention is the national public health agency of the United States. CDC may also refer to: | 2001-10-18T10:43:09Z | 2023-12-11T10:29:34Z | [
"Template:TOC right",
"Template:Lang-ca",
"Template:Lang",
"Template:Disambiguation"
] | https://en.wikipedia.org/wiki/CDC_(disambiguation) |
6,811 | Centers for Disease Control and Prevention | The Centers for Disease Control and Prevention (CDC) is the national public health agency of the United States. It is a United States federal agency under the Department of Health and Human Services, and is headquartered in Atlanta, Georgia.
The agency's main goal is the protection of public health and safety through the control and prevention of disease, injury, and disability in the US and worldwide. The CDC focuses national attention on developing and applying disease control and prevention. It focuses in particular on infectious disease, foodborne pathogens, environmental health, occupational safety and health, health promotion, injury prevention, and educational activities designed to improve the health of United States citizens. The CDC also conducts research and provides information on non-infectious diseases, such as obesity and diabetes, and is a founding member of the International Association of National Public Health Institutes.
The CDC's current Director is Mandy Cohen, who assumed office on July 10, 2023.
The Communicable Disease Center was founded July 1, 1946, as the successor to the World War II Malaria Control in War Areas program of the Office of National Defense Malaria Control Activities.
Preceding its founding, organizations with global influence in malaria control were the Malaria Commission of the League of Nations and the Rockefeller Foundation. The Rockefeller Foundation greatly supported malaria control, sought to have the governments take over some of its efforts, and collaborated with the agency.
The new agency was a branch of the U.S. Public Health Service, and Atlanta was chosen as the location because malaria was endemic in the Southern United States. The agency went through several name changes before adopting the name Communicable Disease Center in 1946. Offices were located on the sixth floor of the Volunteer Building on Peachtree Street.
With a budget at the time of about $1 million, 59 percent of its personnel were engaged in mosquito abatement and habitat control with the objective of control and eradication of malaria in the United States (see National Malaria Eradication Program).
Among its 369 employees, the main jobs at CDC were originally entomology and engineering. In CDC's initial years, more than six and a half million homes were sprayed, mostly with DDT. In 1946, there were only seven medical officers on duty and an early organization chart was drawn, somewhat fancifully, in the shape of a mosquito. Under Joseph Walter Mountin, the CDC continued to be an advocate for public health issues and pushed to extend its responsibilities to many other communicable diseases.
In 1947, the CDC made a token payment of $10 to Emory University for 15 acres (61,000 m²) of land on Clifton Road in DeKalb County, still the home of CDC headquarters as of 2019. CDC employees collected the money to make the purchase. The benefactor behind the "gift" was Robert W. Woodruff, chairman of the board of The Coca-Cola Company. Woodruff had a long-time interest in malaria control, which had been a problem in areas where he went hunting. The same year, the PHS transferred its San Francisco-based plague laboratory into the CDC as the Epidemiology Division, and a new Veterinary Diseases Division was established.
An Epidemic Intelligence Service (EIS) was established in 1951, originally due to biological warfare concerns arising from the Korean War; EIS evolved into a two-year postgraduate training program in epidemiology, and a prototype for Field Epidemiology Training Programs (FETP), which began in 1980. The FETP is a large operation that has trained more than 18,000 disease detectives in over 80 countries. In 2020 FETP celebrated the 40th anniversary of the CDC's support for Thailand's Field Epidemiology Training Program. Thailand was the first FETP site created outside North America; the program is now found in numerous countries, reflecting the CDC's influence in promoting this model internationally. The Training Programs in Epidemiology and Public Health Interventions Network (TEPHINET) has graduated 950 students.
The mission of the CDC expanded beyond its original focus on malaria to include sexually transmitted diseases when the Venereal Disease Division of the U.S. Public Health Service (PHS) was transferred to the CDC in 1957. Shortly thereafter, Tuberculosis Control was transferred (in 1960) to the CDC from PHS, and then in 1963 the Immunization program was established.
It became the National Communicable Disease Center effective July 1, 1967, and the Center for Disease Control on June 24, 1970. At the end of the Public Health Service reorganizations of 1966–1973, it was elevated to a principal operating agency of the PHS.
It was renamed the plural Centers for Disease Control effective October 14, 1980, as the modern organization into multiple constituent centers was established. By 1990, it had four centers formed in the 1980s: the Center for Infectious Diseases, the Center for Chronic Disease Prevention and Health Promotion, the Center for Environmental Health and Injury Control, and the Center for Prevention Services; as well as two centers that had been absorbed by the CDC from outside: the National Institute for Occupational Safety and Health in 1973, and the National Center for Health Statistics in 1987.
An act of the United States Congress appended the words "and Prevention" to the name effective October 27, 1992. However, Congress directed that the initialism CDC be retained because of its name recognition. Since the 1990s, the CDC's focus has broadened to include chronic diseases, disabilities, injury control, workplace hazards, environmental health threats, and terrorism preparedness. The CDC combats emerging diseases and other health risks, including birth defects, West Nile virus, obesity, avian, swine, and pandemic flu, E. coli, and bioterrorism. The organization would also prove to be an important factor in preventing the abuse of penicillin. In May 1994 the CDC admitted having sent samples of communicable diseases to the Iraqi government from 1984 through 1989 which were subsequently repurposed for biological warfare, including Botulinum toxin, West Nile virus, Yersinia pestis and Dengue fever virus.
On April 21, 2005, then–CDC Director Julie Gerberding formally announced the reorganization of CDC to "confront the challenges of 21st-century health threats". She established four Coordinating Centers. In 2009 the Obama administration re-evaluated this change and ordered the Coordinating Centers cut as an unnecessary management layer.
As of 2013, the CDC's Biosafety Level 4 laboratories were among the few that exist in the world. They included one of only two official repositories of smallpox in the world, with the other one located at the State Research Center of Virology and Biotechnology VECTOR in the Russian Federation. In 2014, the CDC revealed that it had discovered several misplaced smallpox samples, and that its lab workers had been "potentially infected" with anthrax.
The city of Atlanta annexed the property of the CDC headquarters effective January 1, 2018, as a part of the city's largest annexation within a period of 65 years; the Atlanta City Council had voted to do so the prior December. The CDC and Emory University had requested that the Atlanta city government annex the area, paving the way for a MARTA expansion through the Emory campus, funded by city tax dollars. The headquarters were located in an unincorporated area, statistically in the Druid Hills census-designated place.
On August 17, 2022, Dr. Walensky said the CDC would make drastic changes in the wake of mistakes during the COVID-19 pandemic. She outlined an overhaul of how the CDC would analyze and share data and how they would communicate information to the general public. In her statement to all CDC employees, she said: "For 75 years, CDC and public health have been preparing for COVID-19, and in our big moment, our performance did not reliably meet expectations." Based on the findings of an internal report, Walensky concluded that "The CDC must refocus itself on public health needs, respond much faster to emergencies and outbreaks of disease, and provide information in a way that ordinary people and state and local health authorities can understand and put to use" (as summarized by the New York Times).
The CDC is organized into "Centers, Institutes, and Offices" (CIOs), with each organizational unit implementing the agency's activities in a particular area of expertise while also providing intra-agency support and resource-sharing for cross-cutting issues and specific health threats.
As of the most recent reorganization in February 2023, the CIOs are:
The Office of Public Health Preparedness was created during the 2001 anthrax attacks shortly after the terrorist attacks of September 11, 2001. Its purpose was to coordinate the government's response to a range of biological terrorism threats.
Most CDC centers are located in Atlanta. Building 18, which opened in 2005 at the CDC's main Roybal campus (named in honor of the late Representative Edward R. Roybal), contains the premier BSL4 laboratory in the United States.
A few of the centers are based in or operate other domestic locations:
In addition, CDC operates quarantine facilities in 20 cities in the U.S.
CDC's budget for fiscal year 2018 was $11.9 billion. The CDC offers grants to help organizations advance health, safety and awareness at the community level in the United States. The CDC awards over 85 percent of its annual budget through these grants.
As of 2021, CDC staff numbered approximately 15,000 personnel (including 6,000 contractors and 840 United States Public Health Service Commissioned Corps officers) in 170 occupations. Eighty percent held bachelor's degrees or higher; almost half had advanced degrees (a master's degree or a doctorate such as a PhD, D.O., or M.D.).
Common CDC job titles include engineer, entomologist, epidemiologist, biologist, physician, veterinarian, behavioral scientist, nurse, medical technologist, economist, public health advisor, health communicator, toxicologist, chemist, computer scientist, and statistician. The CDC also operates a number of notable training and fellowship programs, including those indicated below.
The Epidemic Intelligence Service (EIS) is composed of "boots-on-the-ground disease detectives" who investigate public health problems domestically and globally. When called upon by a governmental body, EIS officers may embark on short-term epidemiological assistance assignments, or "Epi-Aids", to provide technical expertise in containing and investigating disease outbreaks. The EIS program is a model for the international Field Epidemiology Training Program.
The CDC also operates the Public Health Associate Program (PHAP), a two-year paid fellowship for recent college graduates to work in public health agencies all over the United States. PHAP was founded in 2007 and currently has 159 associates in 34 states.
The Director of CDC is a Senior Executive Service position that may be filled either by a career employee, or as a political appointment that does not require Senate confirmation, with the latter method typically being used. The director serves at the pleasure of the President and may be fired at any time. On January 20, 2025, the CDC Director position will change to require Senate confirmation, due to a provision in the Consolidated Appropriations Act, 2023. The CDC Director concurrently serves as the Administrator of the Agency for Toxic Substances and Disease Registry.
Twenty directors have served the CDC or its predecessor agencies, including three who served during the Trump administration (among them Anne Schuchat, who twice served as acting director) and three who served during the Carter administration (including one acting director not shown here). Two served under Bill Clinton, but only one across the Nixon and Ford administrations.
The CDC's programs address more than 400 diseases, health threats, and conditions that are major causes of death, disease, and disability. The CDC's website has information on various infectious (and noninfectious) diseases, including smallpox, measles, and others.
The CDC targets the transmission of influenza, including the H1N1 swine flu, and launched websites to educate people about hygiene.
Within the division are two programs: the Federal Select Agent Program (FSAP) and the Import Permit Program. The FSAP is run jointly with an office within the U.S. Department of Agriculture, regulating agents that can cause disease in humans, animals, and plants. The Import Permit Program regulates the importation of "infectious biological materials."
The CDC runs a program that protects the public from rare and dangerous substances such as anthrax and the Ebola virus. The program, called the Federal Select Agent Program, calls for inspections of labs in the U.S. that work with dangerous pathogens.
During the 2014 Ebola outbreak in West Africa, the CDC helped coordinate the return of two infected American aid workers for treatment at Emory University Hospital, the home of a special unit to handle highly infectious diseases.
As a response to the 2014 Ebola outbreak, Congress passed a Continuing Appropriations Resolution allocating $30,000,000 towards CDC's efforts to fight the virus.
The CDC also works on non-communicable diseases, including chronic diseases caused by obesity, physical inactivity and tobacco-use. The work of the Division for Cancer Prevention and Control, led from 2010 by Lisa C. Richardson, is also within this remit.
The CDC implemented their National Action Plan for Combating Antibiotic Resistant Bacteria as a measure against the spread of antibiotic resistance in the United States. This initiative has a budget of $161 million and includes the development of the Antibiotic Resistance Lab Network.
Globally, the CDC works with other organizations to address global health challenges and contain disease threats at their source. They work with many international organizations such as the World Health Organization (WHO) as well as ministries of health and other groups on the front lines of outbreaks. The agency maintains staff in more than 60 countries, including some from the U.S. but more from the countries in which they operate. The agency's global divisions include the Division of Global HIV and TB (DGHT), the Division of Parasitic Diseases and Malaria (DPDM), the Division of Global Health Protection (DGHP), and the Global Immunization Division (GID).
The CDC has been working with the WHO to implement the International Health Regulations (IHR), an agreement between 196 countries to prevent, control, and report on the international spread of disease, through initiatives including the Global Disease Detection Program (GDD).
The CDC has also been involved in implementing the U.S. global health initiatives President's Emergency Plan for AIDS Relief (PEPFAR) and President's Malaria Initiative.
The CDC collects and publishes health information for travelers in a comprehensive book, CDC Health Information for International Travel, which is commonly known as the "yellow book." The book is available online and in print as a new edition every other year and includes current travel health guidelines, vaccine recommendations, and information on specific travel destinations. The CDC also issues travel health notices on its website, consisting of three levels: Watch (Level 1: practice usual precautions), Alert (Level 2: practice enhanced precautions), and Warning (Level 3: avoid nonessential travel).
The CDC uses a number of tools to monitor the safety of vaccines. The Vaccine Adverse Event Reporting System (VAERS) is a national vaccine safety surveillance program run by the CDC and the FDA. "VAERS detects possible safety issues with U.S. vaccines by collecting information about adverse events (possible side effects or health problems) after vaccination." The CDC's Safety Information by Vaccine page provides a list of the latest safety information, side effects, and answers to common questions about CDC recommended vaccines.
The Vaccine Safety Datalink (VSD) works with a network of healthcare organizations to share data on vaccine safety and adverse events. The Clinical Immunization Safety Assessment (CISA) project is a network of vaccine experts and health centers that research and assist the CDC in the area of vaccine safety.
CDC also runs a program called V-safe, a smartphone web application that allows COVID-19 vaccine recipients to be surveyed in detail about their health in response to getting the shot.
The CDC Foundation operates independently from CDC as a private, nonprofit 501(c)(3) organization incorporated in the State of Georgia. The creation of the Foundation was authorized by section 399F of the Public Health Service Act to support the mission of CDC in partnership with the private sector, including organizations, foundations, businesses, educational groups, and individuals. From 1995 to 2022, the Foundation raised over $1.6 billion and launched more than 1,200 health programs. Bill Cosby formerly served as a member of the Foundation's Board of Directors, continuing as an honorary member after completing his term.
The Foundation engages in research projects and health programs in more than 160 countries every year, including in focus areas such as cardiovascular disease, cancer, emergency response, and infectious diseases, particularly HIV/AIDS, Ebola, rotavirus, and COVID-19.
In 2015, BMJ associate editor Jeanne Lenzer raised concerns that the CDC's recommendations and publications may be influenced by donations received through the Foundation from donors that include pharmaceutical companies.
For 15 years, the CDC had direct oversight over the Tuskegee syphilis experiment. In the study, which lasted from 1932 to 1972, a group of Black men (nearly 400 of whom had syphilis) were studied to learn more about the disease. The disease was left untreated in the men, who had not given their informed consent to serve as research subjects. The Tuskegee Study was initiated in 1932 by the Public Health Service, with the CDC taking over the Tuskegee Health Benefit Program in 1995.
An area of partisan dispute related to CDC funding is research into firearm injuries and deaths. Although the CDC was one of the first government agencies to study gun-related data, the Dickey Amendment, passed in 1996 with the support of the National Rifle Association of America, states that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control". Advocates for gun control oppose the amendment and have tried to overturn it.
The history of the Dickey Amendment's passage begins in 1992, when Mark L. Rosenberg and five CDC colleagues founded the CDC's National Center for Injury Prevention and Control, with an annual budget of approximately $260,000. They focused on "identifying causes of firearm deaths, and methods to prevent them". Their first report, published in the New England Journal of Medicine in 1993 as "Gun Ownership as a Risk Factor for Homicide in the Home", found that the mere presence of a gun in a home increased the risk of homicide roughly 2.7-fold and of suicide nearly fivefold, a "huge" increase. In response, the NRA launched a "campaign to shut down the Injury Center." Two conservative pro-gun groups, Doctors for Responsible Gun Ownership and Doctors for Integrity and Policy Research, joined the effort, and by 1995 politicians also supported the initiative. In 1996, Jay Dickey (R-AR) introduced the Dickey Amendment, stating that "none of the funds available for injury prevention and control at the Centers for Disease Control and Prevention may be used to advocate or promote gun control", as a rider in the 1996 appropriations bill. In 1997, "Congress re-directed all of the money for gun research to the study of traumatic brain injury." David Satcher, CDC head from 1993 to 1998, advocated for firearms research. In 2016, over a dozen "public health insiders, including current and former CDC senior leaders" told The Trace interviewers that CDC senior leaders took a cautious stance in their interpretation of the Dickey Amendment: they believed they could do more, but were afraid of political and personal retribution.
In 2013, the American Medical Association, the American Psychological Association, and the American Academy of Pediatrics sent a letter to the leaders of the Senate Appropriations Committee asking them "to support at least $10 million within the Centers for Disease Control and Prevention (CDC) in FY 2014 along with sufficient new funding at the National Institutes of Health to support research into the causes and prevention of violence. Furthermore, we urge Members to oppose any efforts to reduce, eliminate, or condition CDC funding related to violence prevention research." Congress maintained the ban in subsequent budgets.
In October 2014, the CDC gave a nurse with a fever who was later diagnosed with Ebola permission to board a commercial flight to Cleveland.
The CDC has been widely criticized for its handling of the COVID-19 pandemic. In 2022, CDC director Rochelle Walensky acknowledged "some pretty dramatic, pretty public mistakes, from testing to data to communications", based on the findings of an internal examination.
The first confirmed case of COVID-19 was discovered in the U.S. on January 20, 2020. However, widespread COVID-19 testing in the United States was effectively stalled until February 28, when federal officials revised a faulty CDC test, and days afterward, when the Food and Drug Administration began loosening rules that had restricted other labs from developing tests. In February 2020, as the CDC's early coronavirus test malfunctioned nationwide, CDC Director Robert R. Redfield reassured fellow officials on the White House Coronavirus Task Force that the problem would be quickly solved, according to White House officials. It took about three weeks to sort out the failed test kits, which may have been contaminated during their processing in a CDC lab. Later investigations by the FDA and the Department of Health and Human Services found that the CDC had violated its own protocols in developing its tests. In November 2020, NPR reported that an internal review document it obtained revealed that the CDC knew the first batch of tests, issued in early January, would give wrong results 33 percent of the time, but the agency released them anyway.
In May 2020, The Atlantic reported that the CDC was conflating the results of two different types of coronavirus tests — tests that diagnose current coronavirus infections, and tests that measure whether someone has ever had the virus. The magazine said this distorted several important metrics, provided the country with an inaccurate picture of the state of the pandemic, and overstated the country's testing ability.
In July 2020, the Trump administration ordered hospitals to bypass the CDC and instead send all COVID-19 patient information to a database at the Department of Health and Human Services. Some health experts opposed the order and warned that the data might become politicized or withheld from the public. On July 15, the CDC alarmed health care groups by temporarily removing COVID-19 dashboards from its website. It restored the data a day later.
In August 2020, the CDC issued guidance stating that people showing no COVID-19 symptoms did not need testing. The new guidelines alarmed many public health experts. The guidelines were crafted by the White House Coronavirus Task Force without the sign-off of Anthony Fauci of the NIH, and objections from other experts at the CDC went unheard. Officials said that a CDC document in July arguing for "the importance of reopening schools" had also been crafted outside the CDC. On August 16, the agency's chief of staff, Kyle McGowan, and his deputy, Amanda Campbell, resigned. The testing guidelines were reversed on September 18, 2020, after public controversy.
In September 2020, the CDC drafted an order requiring masks on all public transportation in the United States, but the White House Coronavirus Task Force blocked the order, refusing to discuss it, according to two federal health officials.
In October 2020, it was disclosed that White House advisers had repeatedly altered the writings of CDC scientists about COVID-19, including recommendations on church choirs, social distancing in bars and restaurants, and summaries of public-health reports.
In the lead-up to Thanksgiving 2020, the CDC advised Americans not to travel for the holiday, saying, "It's not a requirement. It's a recommendation for the American public to consider." The White House coronavirus task force held its first public briefing in months that day, but travel was not mentioned.
The New York Times later concluded that the CDC's decisions to "ben[d] to political pressure from the Trump White House to alter key public health guidance or withhold it from the public [...] cost it a measure of public trust that experts say it still has not recaptured" as of 2022.
In May 2021, following criticism by scientists, the CDC updated its COVID-19 guidance to acknowledge airborne transmission of COVID-19, after having previously claimed that the majority of infections occurred via "close contact, not airborne transmission".
In December 2021, the CDC shortened its recommended isolation period for asymptomatic individuals infected with COVID-19 from 10 days to five.
Until 2022, the CDC withheld critical data on COVID-19 vaccine boosters, hospitalizations, and wastewater surveillance.
On June 10, 2022, the Biden Administration ordered the CDC to remove the COVID-19 testing requirement for air travelers entering the United States.
In January 2022, it was revealed that the CDC had communicated with moderators at Facebook and Instagram over COVID-19 information and discussion on the platforms, including information that the CDC considered false or misleading and that might influence people not to get the COVID-19 vaccines.
During the pandemic, the CDC Morbidity and Mortality Weekly Report (MMWR) came under pressure from political appointees at the Department of Health and Human Services (HHS) to modify its reporting so as not to conflict with what Trump was saying about the pandemic.
Starting in June 2020, Michael Caputo, the HHS assistant secretary for public affairs, and his chief advisor Paul Alexander tried to delay, suppress, change, and retroactively edit MMWR releases about the effectiveness of potential treatments for COVID-19, the transmissibility of the virus, and other issues where the president had taken a public stance. Alexander tried unsuccessfully to get personal approval of all issues of MMWR before they went out.
Caputo claimed this oversight was necessary because MMWR reports were being tainted by "political content"; he demanded to know the political leanings of the scientists who reported that hydroxychloroquine had little benefit as a treatment while Trump was saying the opposite. In emails Alexander accused CDC scientists of attempting to "hurt the president" and writing "hit pieces on the administration".
In October 2020, emails obtained by Politico showed that Alexander requested multiple alterations in a report. The published alterations included a title being changed from "Children, Adolescents, and Young Adults" to "Persons." One current and two former CDC officials who reviewed the email exchanges said they were troubled by the "intervention to alter scientific reports viewed as untouchable prior to the Trump administration" that "appeared to minimize the risks of the coronavirus to children by making the report's focus on children less clear."
A poll conducted in September 2020 found that nearly 8 in 10 Americans trusted the CDC, a decrease from 87 percent in April 2020. Another poll showed an even larger decline, with trust dropping 16 percentage points. By January 2022, according to an NBC News poll, only 44% of Americans trusted the CDC, compared to 69% at the beginning of the pandemic. As trust in the agency eroded, so did trust in the information it disseminated. The diminishing trust in the CDC and its information releases also fueled "vaccine hesitancy", with the result that "just 53 percent of Americans said they would be somewhat or extremely likely to get a vaccine."
In September 2020, amid these accusations and the CDC's faltering image, the agency's leadership was called into question. Richard Besser, a former acting director of the CDC, said of Redfield that "I find it concerning that the CDC director has not been outspoken when there have been instances of clear political interference in the interpretation of science." In addition, Mark Rosenberg, the first director of CDC's National Center for Injury Prevention and Control, also questioned Redfield's leadership and his lack of defense of the science.
Historically, the CDC has not been a political agency; however, the COVID-19 pandemic, and specifically the Trump administration's handling of the pandemic, resulted in a "dangerous shift", according to a previous CDC director and others. Four previous directors claim that the agency's voice was "muted for political reasons." Politicization of the agency has continued into the Biden administration, as COVID-19 guidance has been contradicted by state guidance and critics have charged that "CDC's credibility is eroding".
In 2021, the CDC, then under the leadership of the Biden Administration, received criticism for its mixed messaging surrounding COVID-19 vaccines, mask-wearing guidance, and the state of the pandemic.
On May 16, 2011, the Centers for Disease Control and Prevention's blog published an article instructing the public on what to do to prepare for a zombie invasion. While the article did not claim that such a scenario was possible, it used the popular-culture appeal of zombies as a means of urging citizens to prepare for all potential hazards, such as earthquakes, tornadoes, and floods.
According to David Daigle, the associate director for Communications, Public Health Preparedness and Response, the idea arose when his team was discussing their upcoming hurricane-information campaign and Daigle mused that "we say pretty much the same things every year, in the same way, and I just wonder how many people are paying attention." A social-media employee mentioned that the subject of zombies had come up a lot on Twitter when she had been tweeting about the Fukushima Daiichi nuclear disaster and radiation. The team realized that a campaign like this would most likely reach a different audience from the one that normally pays attention to hurricane-preparedness warnings and went to work on the zombie campaign, launching it right before hurricane season began. "The whole idea was, if you're prepared for a zombie apocalypse, you're prepared for pretty much anything," said Daigle.
Once the blog article was posted, the CDC announced an open contest for YouTube submissions of the most creative and effective videos covering preparedness for a zombie apocalypse (or apocalypse of any kind), to be judged by the "CDC Zombie Task Force". Submissions were open until October 11, 2011. They also released a zombie-themed graphic novella available on their website. Zombie-themed educational materials for teachers are available on the site.
Chandrasekhar limit

The Chandrasekhar limit (/ˌtʃændrəˈʃeɪkər/) is the maximum mass of a stable white dwarf star. The currently accepted value of the Chandrasekhar limit is about 1.4 M☉ (2.765×10³⁰ kg).
White dwarfs resist gravitational collapse primarily through electron degeneracy pressure, whereas main-sequence stars resist collapse through thermal pressure. The Chandrasekhar limit is the mass above which electron degeneracy pressure in the star's core is insufficient to balance the star's own gravitational self-attraction. Consequently, a white dwarf with a mass greater than the limit is subject to further gravitational collapse, evolving into a different type of stellar remnant, such as a neutron star or black hole. Those with masses up to the limit remain stable as white dwarfs. The analogous threshold for neutron stars is the Tolman–Oppenheimer–Volkoff limit, above which a neutron star collapses into a denser form such as a black hole.
The limit was named after Subrahmanyan Chandrasekhar. Chandrasekhar improved upon the accuracy of the calculation in 1930 by calculating the limit for a polytrope model of a star in hydrostatic equilibrium, and comparing his limit to the earlier limit found by E. C. Stoner for a uniform-density star. Importantly, the existence of a limit, based on the conceptual breakthrough of combining relativity with Fermi degeneracy, was first established in separate papers published by Wilhelm Anderson and E. C. Stoner in 1929. The limit was initially ignored by the community of scientists because such a limit would logically require the existence of black holes, which were considered a scientific impossibility at the time. The fact that the roles of Stoner and Anderson are often overlooked in the astronomy community has been noted.
The priority dispute has been discussed at length by Virginia Trimble: "Chandrasekhar famously, perhaps even notoriously did his critical calculation on board ship in 1930, and ... was not aware of either Stoner's or Anderson's work at the time. His work was therefore independent, but, more to the point, he adopted Eddington's polytropes for his models which could, therefore, be in hydrostatic equilibrium, which constant density stars cannot, and real ones must be."
Electron degeneracy pressure is a quantum-mechanical effect arising from the Pauli exclusion principle. Since electrons are fermions, no two electrons can be in the same state, so not all electrons can be in the minimum-energy level. Rather, electrons must occupy a band of energy levels. Compression of the electron gas increases the number of electrons in a given volume and raises the maximum energy level in the occupied band. Therefore, the energy of the electrons increases on compression, so pressure must be exerted on the electron gas to compress it, producing electron degeneracy pressure. With sufficient compression, electrons are forced into nuclei in the process of electron capture, relieving the pressure.
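As a quantitative aside (a standard Fermi-gas result, stated here for orientation rather than taken from the text above): for a cold, nonrelativistic electron gas of number density n_e, filling all momentum states up to the Fermi momentum gives a pressure

P = \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e}\,n_e^{5/3},

and writing n_e = ρ/(μₑ m_H) turns this into the P ∝ ρ^(5/3) equation of state used in the next paragraph.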
In the nonrelativistic case, electron degeneracy pressure gives rise to an equation of state of the form P = K₁ρ^(5/3), where P is the pressure, ρ is the mass density, and K₁ is a constant. Solving the hydrostatic equation leads to a model white dwarf that is a polytrope of index 3/2 – and therefore has radius inversely proportional to the cube root of its mass, and volume inversely proportional to its mass.
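For reference, a standard textbook sketch of the constant K₁ (not quoted from this article) for an ideal, fully degenerate electron gas, writing the electron number density as n_e = ρ/(μ_e m_H) and m_e for the electron mass, is:

```latex
% Nonrelativistic degenerate electron gas (ideal Fermi gas assumption)
P \;=\; \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e}\, n_e^{5/3}
  \;=\; K_1\,\rho^{5/3},
\qquad
K_1 \;=\; \frac{(3\pi^2)^{2/3}}{5}\,\frac{\hbar^2}{m_e\,(\mu_e m_\mathrm{H})^{5/3}} .
```

Substituting this equation of state into the hydrostatic equation yields the n = 3/2 polytrope and the R ∝ M^(−1/3) scaling stated above.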
As the mass of a model white dwarf increases, the typical energies to which degeneracy pressure forces the electrons are no longer negligible relative to their rest masses. The velocities of the electrons approach the speed of light, and special relativity must be taken into account. In the strongly relativistic limit, the equation of state takes the form P = K₂ρ^(4/3). This yields a polytrope of index 3, which has a total mass, M_limit, depending only on K₂.
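Likewise, the standard textbook form of the ultrarelativistic constant (same ideal Fermi gas assumptions as above) is:

```latex
% Ultrarelativistic degenerate electron gas
P \;=\; \frac{(3\pi^2)^{1/3}}{4}\,\hbar c\, n_e^{4/3}
  \;=\; K_2\,\rho^{4/3},
\qquad
K_2 \;=\; \frac{(3\pi^2)^{1/3}}{4}\,\frac{\hbar c}{(\mu_e m_\mathrm{H})^{4/3}} ,
\qquad
M_\text{limit} \,\propto\, \left(\frac{K_2}{G}\right)^{3/2} .
```

Because the mass of an n = 3 polytrope depends only on K₂ and G, the limiting mass is independent of the star's central density.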
For a fully relativistic treatment, the equation of state used interpolates between the equations P = K₁ρ^(5/3) for small ρ and P = K₂ρ^(4/3) for large ρ. When this is done, the model radius still decreases with mass, but becomes zero at M_limit. This is the Chandrasekhar limit. The curves of radius against mass for the non-relativistic and relativistic models are shown in the graph. They are colored blue and green, respectively. μ_e has been set equal to 2. Radius is measured in standard solar radii or kilometers, and mass in standard solar masses.
Calculated values for the limit vary depending on the nuclear composition of the mass. Chandrasekhar gives the following expression, based on the equation of state for an ideal Fermi gas:

M_limit = (ω₃⁰ √(3π) / 2) (ħc/G)^(3/2) (1 / (μ_e m_H))^2

where ħ is the reduced Planck constant, c is the speed of light, G is the gravitational constant, m_H is the mass of the hydrogen atom, μ_e is the average molecular weight per electron, which depends upon the chemical composition of the star, and ω₃⁰ ≈ 2.018236 is a constant connected with the solution to the Lane–Emden equation.
As √(ħc/G) is the Planck mass M_Pl, the limit is of the order of M_Pl³ / m_H².
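As a quick numerical sanity check of the expression above (a minimal sketch, not from the article; the constants are rounded CODATA/IAU values), evaluating it for μ_e = 2 reproduces the quoted ~1.4 M☉ scale:

```python
# Minimal numerical check of Chandrasekhar's limiting-mass expression.
import math

hbar   = 1.054571817e-34  # reduced Planck constant, J*s
c      = 2.99792458e8     # speed of light, m/s
G      = 6.67430e-11      # gravitational constant, m^3 kg^-1 s^-2
m_H    = 1.6735575e-27    # mass of the hydrogen atom, kg
M_sun  = 1.98892e30       # solar mass, kg
omega3 = 2.018236         # Lane-Emden constant for the n = 3 polytrope
mu_e   = 2.0              # mean molecular weight per electron (C/O white dwarf)

M_limit = (omega3 * math.sqrt(3 * math.pi) / 2
           * (hbar * c / G) ** 1.5
           / (mu_e * m_H) ** 2)

print(f"M_limit = {M_limit:.3e} kg = {M_limit / M_sun:.2f} M_sun")
# Prints roughly: M_limit = 2.851e+30 kg = 1.43 M_sun
```

The small difference from the accepted 1.4 M☉ (2.765×10^30 kg) reflects the idealized equation of state and the choice of constants, not an error in the formula.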
The limiting mass can be obtained formally from Chandrasekhar's white dwarf equation by taking the limit of large central density.
A more accurate value of the limit than that given by this simple model requires adjusting for various factors, including electrostatic interactions between the electrons and nuclei and effects caused by nonzero temperature. Lieb and Yau have given a rigorous derivation of the limit from a relativistic many-particle Schrödinger equation.
In 1926, the British physicist Ralph H. Fowler observed that the relationship between the density, energy, and temperature of white dwarfs could be explained by viewing them as a gas of nonrelativistic, non-interacting electrons and nuclei that obey Fermi–Dirac statistics. This Fermi gas model was then used by the British physicist Edmund Clifton Stoner in 1929 to calculate the relationship among the mass, radius, and density of white dwarfs, assuming they were homogeneous spheres. Wilhelm Anderson applied a relativistic correction to this model, giving rise to a maximum possible mass of approximately 1.37×10^30 kg. In 1930, Stoner derived the internal energy–density equation of state for a Fermi gas, and was then able to treat the mass–radius relationship in a fully relativistic manner, giving a limiting mass of approximately 2.19×10^30 kg (for μ_e = 2.5). Stoner went on to derive the pressure–density equation of state, which he published in 1932. These equations of state were also previously published by the Soviet physicist Yakov Frenkel in 1928, together with some other remarks on the physics of degenerate matter. Frenkel's work, however, was ignored by the astronomical and astrophysical community.
A series of papers published between 1931 and 1935 had its beginning on a trip from India to England in 1930, during which the Indian physicist Subrahmanyan Chandrasekhar worked on the calculation of the statistics of a degenerate Fermi gas. In these papers, Chandrasekhar solved the hydrostatic equation together with the nonrelativistic Fermi gas equation of state, and also treated the case of a relativistic Fermi gas, giving rise to the value of the limit shown above. Chandrasekhar reviews this work in his Nobel Prize lecture. This value was also computed in 1932 by the Soviet physicist Lev Landau, who, however, did not apply it to white dwarfs and concluded that quantum laws might be invalid for stars heavier than 1.5 solar masses.
Chandrasekhar's work on the limit aroused controversy, owing to the opposition of the British astrophysicist Arthur Eddington. Eddington was aware that the existence of black holes was theoretically possible, and also realized that the existence of the limit made their formation possible. However, he was unwilling to accept that this could happen. After a talk by Chandrasekhar on the limit in 1935, he replied:
The star has to go on radiating and radiating and contracting and contracting until, I suppose, it gets down to a few km radius, when gravity becomes strong enough to hold in the radiation, and the star can at last find peace. ... I think there should be a law of Nature to prevent a star from behaving in this absurd way!
Eddington's proposed solution to the perceived problem was to modify relativistic mechanics so as to make the law P = K₁ρ^(5/3) universally applicable, even for large ρ. Although Niels Bohr, Fowler, Wolfgang Pauli, and other physicists agreed with Chandrasekhar's analysis, at the time, owing to Eddington's status, they were unwilling to publicly support Chandrasekhar. Through the rest of his life, Eddington held to his position in his writings, including his work on his fundamental theory. The drama associated with this disagreement is one of the main themes of Empire of the Stars, Arthur I. Miller's biography of Chandrasekhar. In Miller's view:
Chandra's discovery might well have transformed and accelerated developments in both physics and astrophysics in the 1930s. Instead, Eddington's heavy-handed intervention lent weighty support to the conservative community of astrophysicists, who steadfastly refused even to consider the idea that stars might collapse to nothing. As a result, Chandra's work was almost forgotten.
However, in 1983, in recognition of his work, Chandrasekhar shared the Nobel Prize in Physics "for his theoretical studies of the physical processes of importance to the structure and evolution of the stars" with William Alfred Fowler.
The core of a star is kept from collapsing by the heat generated by the fusion of nuclei of lighter elements into heavier ones. At various stages of stellar evolution, the nuclei required for this process are exhausted, and the core collapses, causing it to become denser and hotter. A critical situation arises when iron accumulates in the core, since iron nuclei are incapable of generating further energy through fusion. If the core becomes sufficiently dense, electron degeneracy pressure will play a significant part in stabilizing it against gravitational collapse.
If a main-sequence star is not too massive (less than approximately 8 solar masses), it eventually sheds enough mass to form a white dwarf having mass below the Chandrasekhar limit, which consists of the former core of the star. For more-massive stars, electron degeneracy pressure does not keep the iron core from collapsing to very great density, leading to formation of a neutron star, black hole, or, speculatively, a quark star. (For very massive, low-metallicity stars, it is also possible that instabilities destroy the star completely.) During the collapse, neutrons are formed by the capture of electrons by protons in the process of electron capture, leading to the emission of neutrinos. The decrease in gravitational potential energy of the collapsing core releases a large amount of energy on the order of 10^46 J (100 foe). Most of this energy is carried away by the emitted neutrinos and the kinetic energy of the expanding shell of gas; only about 1% is emitted as optical light. This process is believed responsible for supernovae of types Ib, Ic, and II.
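A rough order-of-magnitude check of that figure, assuming a core of about 1.4 M☉ collapsing to a neutron-star radius of roughly 10 km (both assumed illustrative values):

```latex
% Gravitational binding energy released, to order of magnitude
E \,\sim\, \frac{G M^2}{R}
  \,\approx\, \frac{(6.67\times10^{-11})\,(2.8\times10^{30}\ \mathrm{kg})^2}{10^{4}\ \mathrm{m}}
  \,\approx\, 5\times10^{46}\ \mathrm{J},
```

which, with 1 foe = 10^44 J, agrees to order of magnitude with the 10^46 J (100 foe) quoted above.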
Type Ia supernovae derive their energy from runaway fusion of the nuclei in the interior of a white dwarf. This fate may befall carbon–oxygen white dwarfs that accrete matter from a companion giant star, leading to a steadily increasing mass. As the white dwarf's mass approaches the Chandrasekhar limit, its central density increases, and, as a result of compressional heating, its temperature also increases. This eventually ignites nuclear fusion reactions, leading to an immediate carbon detonation, which disrupts the star and causes the supernova.
A strong indication of the reliability of Chandrasekhar's formula is that the absolute magnitudes of supernovae of Type Ia are all approximately the same; at maximum luminosity, M_V is approximately −19.3, with a standard deviation of no more than 0.3. A 1-sigma interval therefore represents a factor of less than 2 in luminosity. This seems to indicate that all type Ia supernovae convert approximately the same amount of mass to energy.
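The factor of 2 follows from the standard magnitude-luminosity relation; a worked check for the quoted ±0.3 mag spread:

```latex
% Magnitude difference to luminosity ratio
\frac{L_1}{L_2} = 10^{0.4\,\Delta m},
\qquad
\Delta m = 2 \times 0.3 = 0.6\ \text{mag}
\;\Rightarrow\;
\frac{L_1}{L_2} = 10^{0.24} \approx 1.7 \,<\, 2 .
```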
In April 2003, the Supernova Legacy Survey observed a type Ia supernova, designated SNLS-03D3bb, in a galaxy approximately 4 billion light years away. According to a group of astronomers at the University of Toronto and elsewhere, the observations of this supernova are best explained by assuming that it arose from a white dwarf that had grown to twice the mass of the Sun before exploding. They believe that the star, dubbed the "Champagne Supernova", may have been spinning so fast that a centrifugal tendency allowed it to exceed the limit. Alternatively, the supernova may have resulted from the merger of two white dwarfs, so that the limit was only violated momentarily. Nevertheless, they point out that this observation poses a challenge to the use of type Ia supernovae as standard candles.
Since the observation of the Champagne Supernova in 2003, several more type Ia supernovae have been observed that are very bright, and thought to have originated from white dwarfs whose masses exceeded the Chandrasekhar limit. These include SN 2006gz, SN 2007if, and SN 2009dc. The super-Chandrasekhar mass white dwarfs that gave rise to these supernovae are believed to have had masses up to 2.4–2.8 solar masses. One proposed explanation for the Champagne Supernova was that it resulted from an aspherical explosion of a white dwarf. However, spectropolarimetric observations of SN 2009dc showed it had a polarization smaller than 0.3, making the large-asphericity theory unlikely.
A supernova explosion may leave behind a neutron star (except for type Ia supernovae, which never leave a remnant behind). These objects are even more compact than white dwarfs and are also supported, in part, by degeneracy pressure. A neutron star, however, is so massive and compressed that electrons and protons have combined to form neutrons, and the star is thus supported by neutron degeneracy pressure (as well as short-range repulsive neutron-neutron interactions mediated by the strong force) instead of electron degeneracy pressure. The limiting value for neutron star mass, analogous to the Chandrasekhar limit, is known as the Tolman–Oppenheimer–Volkoff limit. | [
] | 2001-10-16T16:33:16Z | 2023-12-19T04:16:51Z | [
"Template:Sfrac",
"Template:Quote",
"Template:Cite press release",
"Template:Neutron star",
"Template:Solar mass",
"Template:Mvar",
"Template:Main",
"Template:ISBN",
"Template:IPAc-en",
"Template:Math",
"Template:Reflist",
"Template:Webarchive",
"Template:White dwarf",
"Template:Portal bar",
"Template:Rp",
"Template:Cn",
"Template:Val",
"Template:Blockquote",
"Template:Cite web",
"Template:Cite book",
"Template:Cite journal",
"Template:Authority control",
"Template:Short description",
"Template:Use American English"
] | https://en.wikipedia.org/wiki/Chandrasekhar_limit |
6,814 | Congregationalist polity | Congregationalist polity, or congregational polity, often known as congregationalism, is a system of ecclesiastical polity in which every local church (congregation) is independent, ecclesiastically sovereign, or "autonomous". Its first articulation in writing is the Cambridge Platform of 1648 in New England.
Major Protestant Christian traditions that employ congregationalism include Quakerism, the Baptist churches, the Congregational Methodist Church, and Congregational churches known by the Congregationalist name and having descended from the Independent Reformed wing of the Anglo-American Puritan movement of the 17th century. More recent generations have witnessed a growing number of nondenominational churches, which are often congregationalist in their governance. Although autonomous, like-minded congregations may enter into voluntary associations with other congregations, sometimes called conventions or denominations.
Congregationalism is distinguished from episcopal polity, which is governance by a hierarchy of bishops, and is also distinct from presbyterian polity, in which higher assemblies of congregational representatives can exercise considerable authority over individual congregations.
Congregationalism is not limited only to organization of Christian church congregations. The principles of congregationalism have been inherited by the Unitarian Universalist Association and the Canadian Unitarian Council. Most Jewish synagogues, many Sikh Gurdwaras, and most Islamic mosques in the US operate under congregational government, with no hierarchies.
The term congregationalist polity describes a form of church governance that is based on the local congregation. Each local congregation is independent and self-supporting, governed by its own members. Some band into loose voluntary associations with other congregations that share similar beliefs (e.g., the Willow Creek Association and the Unitarian Universalist Association). Others join "conventions", such as the Southern Baptist Convention, the National Baptist Convention or the American Baptist Churches USA (formerly the Northern Baptist Convention). In Quaker Congregationalism, monthly meetings, which are the most basic unit of administration, may be organized into larger Quarterly meetings or Yearly Meetings. Monthly, quarterly, or yearly meetings may also be associated with large "umbrella" associations such as Friends General Conference or Friends United Meeting. These conventions generally provide stronger ties between congregations, including some doctrinal direction and pooling of financial resources. Congregations that belong to associations and conventions are still independently governed. Most non-denominational churches are organized along congregationalist lines. Many do not see these voluntary associations as "denominations", because they "believe that there is no church other than the local church, and denominations are in variance to Scripture."
Congregational churches are Protestant churches in the Calvinist tradition practising congregationalist church governance, in which each congregation independently and autonomously runs its own affairs.
Most Baptists hold that no denominational or ecclesiastical organization has inherent authority over an individual Baptist church. Churches can properly relate to each other under this polity only through voluntary cooperation, never by any sort of coercion. Furthermore, this Baptist polity calls for freedom from governmental control. Exceptions to this form of local governance include the Episcopal Baptists, who have an episcopal system.
Independent Baptist churches have no formal organizational structure above the level of the local congregation. More generally among Baptists, a variety of parachurch agencies and evangelical educational institutions may be supported generously or not at all, depending entirely upon the local congregation's customs and predilections. Usually doctrinal conformity is held as a first consideration when a church makes a decision to grant or decline financial contributions to such agencies, which are legally external and separate from the congregations they serve. These practices also find currency among non-denominational fundamentalist or charismatic fellowships, many of which derive from Baptist origins, culturally if not theologically.
Most Southern Baptist and National Baptist congregations, by contrast, generally relate more closely to external groups such as mission agencies and educational institutions than do those of independent persuasion. However, they adhere to a very similar ecclesiology, refusing to permit outside control or oversight of the affairs of the local church.
Ecclesiastical government is congregational rather than denominational. Churches of Christ purposefully have no central headquarters, councils, or other organizational structure above the local church level. Rather, the independent congregations are a network with each congregation participating at its own discretion in various means of service and fellowship with other congregations. Churches of Christ are linked by their shared commitment to restoration principles.
Congregations are generally overseen by a plurality of elders (also known in some congregations as shepherds, bishops, or pastors) who are sometimes assisted in the administration of various works by deacons. Elders are generally seen as responsible for the spiritual welfare of the congregation, while deacons are seen as responsible for the non-spiritual needs of the church. Deacons serve under the supervision of the elders, and are often assigned to direct specific ministries. Successful service as a deacon is often seen as preparation for the eldership. Elders and deacons are chosen by the congregation based on the qualifications found in 1 Timothy 3 and Titus 1. Congregations look for elders who have a mature enough understanding of scripture to enable them to supervise the minister and to teach, as well as to perform governance functions. In the absence of willing men who meet these qualifications, congregations are sometimes overseen by an unelected committee of the congregation's men.
While the early Restoration Movement had a tradition of itinerant preachers rather than "located Preachers", during the 20th century a long-term, formally trained congregational minister became the norm among Churches of Christ. Ministers are understood to serve under the oversight of the elders. While the presence of a long-term professional minister has sometimes created "significant de facto ministerial authority" and led to conflict between the minister and the elders, the eldership has remained the "ultimate locus of authority in the congregation". There is a small group within the Churches of Christ which oppose a single preacher and, instead, rotate preaching duties among qualified elders (this group tends to overlap with groups which oppose Sunday School and also have only one cup to serve the Lord's Supper).
Churches of Christ hold to the priesthood of all believers. No special titles are used for preachers or ministers that would identify them as clergy. Churches of Christ emphasize that there is no distinction between "clergy" and "laity" and that every member has a gift and a role to play in accomplishing the work of the church.
Methodists who disagreed with the episcopal polity of the Methodist Episcopal Church, South left their mother church to form the Congregational Methodist Church, which retains Wesleyan-Arminian theology but adopts congregationalist polity as a distinctive. | [
] | 2002-02-25T15:51:15Z | 2023-12-04T22:13:13Z | [
"Template:Citation needed",
"Template:See also",
"Template:Cite Catholic Encyclopedia",
"Template:Sfn",
"Template:Efn",
"Template:Sfnm",
"Template:Cite encyclopedia",
"Template:Short description",
"Template:About",
"Template:Ecclesiastical polity",
"Template:Portal",
"Template:Notelist",
"Template:Reflist",
"Template:Cite web",
"Template:Refbegin",
"Template:Cite journal",
"Template:Refend",
"Template:Authority control",
"Template:More citations needed",
"Template:Cite book",
"Template:Cite EB1911"
] | https://en.wikipedia.org/wiki/Congregationalist_polity |
6,816 | Cavalry | Historically, cavalry (from the French word cavalerie, itself derived from "cheval" meaning "horse") are soldiers or warriors who fight mounted on horseback. Cavalry were the most mobile of the combat arms, operating as light cavalry in the roles of reconnaissance, screening, and skirmishing in many armies, or as heavy cavalry for decisive shock attacks in other armies. An individual soldier in the cavalry is known by a number of designations depending on era and tactics, such as a cavalryman, horseman, trooper, cataphract, knight, drabant, hussar, uhlan, mamluk, cuirassier, lancer, dragoon, or horse archer. The designation of cavalry was not usually given to any military forces that used other animals for mounts, such as camels or elephants. Infantry who moved on horseback, but dismounted to fight on foot, were known in the early 17th to the early 18th century as dragoons, a class of mounted infantry which in most armies later evolved into standard cavalry while retaining their historic designation.
Cavalry had the advantage of improved mobility, and a soldier fighting from horseback also had the advantages of greater height, speed, and inertial mass over an opponent on foot. Another element of horse-mounted warfare is the psychological impact a mounted soldier can inflict on an opponent.
The speed, mobility, and shock value of cavalry was greatly appreciated and exploited in armed forces in the Ancient and Middle Ages; some forces were mostly cavalry, particularly in nomadic societies of Asia, notably the Huns of Attila and the later Mongol armies. In Europe, cavalry became increasingly armoured (heavy), eventually evolving into the mounted knights of the medieval period. During the 17th century, cavalry in Europe discarded most of its armor, which was ineffective against the muskets and cannons that were coming into common use, and by the mid-18th century armor had mainly fallen into obsolescence, although some regiments retained a small thickened cuirass that offered protection against lances, sabres, and bayonets, including some protection against shots fired from a distance.
In the interwar period many cavalry units were converted into motorized infantry and mechanized infantry units, or reformed as tank troops. The cavalry tank or cruiser tank was one designed with a speed and purpose beyond that of infantry tanks and would subsequently develop into the main battle tank. Nonetheless, some cavalry still served during World War II (notably in the Red Army, the Mongolian People's Army, the Royal Italian Army, the Royal Hungarian Army, the Romanian Army, the Polish Land Forces, and German light reconnaissance units within the Waffen SS).
Most cavalry units that are horse-mounted in modern armies serve in purely ceremonial roles, or as mounted infantry in difficult terrain such as mountains or heavily forested areas. Modern usage of the term generally refers to units performing the role of reconnaissance, surveillance, and target acquisition (analogous to historical light cavalry) or main battle tank units (analogous to historical heavy cavalry).
Historically, cavalry was divided into light cavalry and heavy cavalry. The differences were their roles in combat, the size of their mounts, and how much armor was worn by the mount and rider.
Heavy cavalry, such as Byzantine cataphracts and knights of the Early Middle Ages in Europe, were used as shock troops, charging the main body of the enemy at the height of a battle; in many cases their actions decided the outcome of the battle, hence the later term battle cavalry. Light cavalry, such as horse archers, hussars, and Cossack cavalry, were assigned all the numerous roles that were ill-suited to more narrowly-focused heavy forces. This includes scouting, deterring enemy scouts, foraging, raiding, skirmishing, pursuit of retreating enemy forces, screening of retreating friendly forces, linking separated friendly forces, and countering enemy light forces in all these same roles.
Light and heavy cavalry roles continued through early modern warfare, but armor was reduced, with light cavalry mostly unarmored. Yet many cavalry units still retained cuirasses and helmets for their protective value against sword and bayonet strikes, and the morale boost these provided to the wearers, despite the actual armour giving little protection from firearms. By this time the main difference between light and heavy cavalry was in their training and weight; the former was regarded as best suited for harassment and reconnaissance, while the latter was considered best for close-order charges. By the start of the 20th century, as total battlefield firepower increased, cavalry increasingly tended to become dragoons in practice, riding mounted between battles, but dismounting to fight as infantry, even though retaining unit names that reflected their older cavalry roles. Military conservatism was, however, strong in most continental cavalry forces during peacetime, and in these, dismounted action continued to be regarded as a secondary function until the outbreak of World War I in 1914.
With the development of armored warfare, the heavy cavalry role of decisive shock troops was taken over by armored units employing medium and heavy tanks, and later main battle tanks. Despite horse-borne cavalry becoming obsolete, the term cavalry is still used, referring in modern times to units continuing to fulfill the traditional light cavalry roles, employing fast armored cars, light tanks, and infantry fighting vehicles instead of horses, while air cavalry employs helicopters.
Before the Iron Age, the role of cavalry on the battlefield was largely performed by light chariots. The chariot originated with the Sintashta-Petrovka culture in Central Asia and was spread by nomadic or semi-nomadic Indo-Iranians. The chariot was quickly adopted by settled peoples both as a military technology and an object of ceremonial status, especially by the pharaohs of the New Kingdom of Egypt from 1550 BC as well as the Assyrian army and Babylonian royalty.
The power of mobility given by mounted units was recognized early on, but was offset by the difficulty of raising large forces and by the inability of horses (then mostly small) to carry heavy armor. Nonetheless, there are indications that, from the 15th century BC onwards, horseback riding was practiced amongst the military elites of the great states of the ancient Near East, most notably those in Egypt, Assyria, the Hittite Empire, and Mycenaean Greece.
Cavalry techniques, and the rise of true cavalry, were an innovation of equestrian nomads of the Central Asian and Iranian steppe and pastoralist tribes such as the Iranic Parthians and Sarmatians. Together with a core of armoured lancers, these were predominantly horse archers using the Parthian shot tactic.
Assyrian reliefs of 865–860 BC show cavalry of this period. At this time, the men had no spurs, saddles, saddle cloths, or stirrups. Fighting from the back of a horse was much more difficult than mere riding. The cavalry acted in pairs; the reins of the mounted archer were controlled by his neighbour's hand. Even at this early time, cavalry used swords, shields, spears, and bows. The sculpture implies two types of cavalry, but this might be a simplification by the artist. Later images of Assyrian cavalry show saddle cloths as primitive saddles, allowing each archer to control his own horse.
As early as 490 BC large horses were bred in the Nisaean plain in Media to carry men with increasing amounts of armour (Herodotus 7,40 & 9,20), but large horses were still very exceptional at this time. By the fourth century BC the Chinese of the Warring States period (403–221 BC) had begun to use cavalry against rival states, and by 331 BC, when Alexander the Great defeated the Persians, the use of chariots in battle was obsolete in most nations, despite a few ineffective attempts to revive scythed chariots. The last recorded use of chariots as a shock force in continental Europe was during the Battle of Telamon in 225 BC. However, chariots remained in use for ceremonial purposes such as carrying the victorious general in a Roman triumph, or for racing.
Outside of mainland Europe, the southern Britons met Julius Caesar with chariots in 55 and 54 BC, but by the time of the Roman conquest of Britain a century later chariots were obsolete, even in Britannia. The last mention of chariot use in Britain was by the Caledonians at the Battle of Mons Graupius, in 84 AD.
During the classical Greek period cavalry were usually limited to those citizens who could afford expensive war-horses. Three types of cavalry became common: light cavalry, whose riders, armed with javelins, could harass and skirmish; heavy cavalry, whose troopers, using lances, had the ability to close in on their opponents; and finally those whose equipment allowed them to fight either on horseback or on foot. The role of horsemen did however remain secondary to that of the hoplites or heavy infantry who comprised the main strength of the citizen levies of the various city states.
Cavalry played a relatively minor role in ancient Greek city-states, with conflicts decided by massed armored infantry. However, Thebes produced Pelopidas, their first great cavalry commander, whose tactics and skills were absorbed by Philip II of Macedon when Philip was a guest-hostage in Thebes. Thessaly was widely known for producing competent cavalrymen, and later experiences in wars both with and against the Persians taught the Greeks the value of cavalry in skirmishing and pursuit. The Athenian author and soldier Xenophon in particular advocated the creation of a small but well-trained cavalry force; to that end, he wrote several manuals on horsemanship and cavalry operations.
The Macedonian Kingdom in the north, on the other hand, developed a strong cavalry force that culminated in the hetairoi (Companion cavalry) of Philip II of Macedon and Alexander the Great. In addition to these heavy cavalry, the Macedonian army also employed lighter horsemen called prodromoi for scouting and screening, as well as the Macedonian pike phalanx and various kinds of light infantry. There were also the Ippiko (or "Horserider"), Greek "heavy" cavalry armed with a kontos (cavalry lance) and sword. These wore leather armour or mail plus a helmet. They were medium rather than heavy cavalry, meaning that they were better suited to be scouts, skirmishers, and pursuers rather than front-line fighters. The effectiveness of this combination of cavalry and infantry helped to break enemy lines and was most dramatically demonstrated in Alexander's conquests of Persia, Bactria, and northwestern India.
The cavalry in the early Roman Republic remained the preserve of the wealthy landed class known as the equites—men who could afford the expense of maintaining a horse in addition to arms and armor heavier than those of the common legionaries. Horses were provided by the Republic and could be withdrawn if neglected or misused, together with the status of being a cavalryman.
As the class grew to be more of a social elite instead of a functional property-based military grouping, the Romans began to employ Italian socii to fill the ranks of their cavalry. The weakness of Roman cavalry was demonstrated by Hannibal Barca during the Second Punic War, when he used his superior mounted forces to win several battles. The most notable of these was the Battle of Cannae, where he inflicted a catastrophic defeat on the Romans. At about the same time the Romans began to recruit foreign auxiliary cavalry from among Gauls, Iberians, and Numidians, the last being highly valued as mounted skirmishers and scouts (see Numidian cavalry). Julius Caesar had a high opinion of his escort of Germanic mixed cavalry, giving rise to the Cohortes Equitatae. Early emperors maintained an ala of Batavian cavalry as their personal bodyguards until the unit was dismissed by Galba in 68 AD.
For the most part, Roman cavalry during the early Republic functioned as an adjunct to the legionary infantry and formed only one-fifth of the standing force comprising a consular army. Except in times of major mobilisation about 1,800 horsemen were maintained, with three hundred attached to each legion. The relatively low ratio of horsemen to infantry does not mean that the utility of cavalry should be underestimated, as its strategic role in scouting, skirmishing, and outpost duties was crucial to the Romans' capability to conduct operations over long distances in hostile or unfamiliar territory. On some occasions Roman cavalry also proved its ability to strike a decisive tactical blow against a weakened or unprepared enemy, such as the final charge at the Battle of Aquilonia.
After defeats such as the Battle of Carrhae, the Romans learned the importance of large cavalry formations from the Parthians. At the same time heavy spears and shields modelled on those favoured by the horsemen of the Greek city-states were adopted to replace the lighter weaponry of early Rome. These improvements in tactics and equipment reflected those of a thousand years earlier, when the first Iranians to reach the Iranian Plateau forced the Assyrians to undertake similar reforms. Nonetheless, the Romans would continue to rely mainly on their heavy infantry supported by auxiliary cavalry.
In the army of the late Roman Empire, cavalry played an increasingly important role. The Spatha, the classical sword throughout most of the 1st millennium, was adopted as the standard model for the Empire's cavalry forces. By the 6th century these had evolved into lengthy straight weapons influenced by Persian and other eastern patterns. Other specialist weapons during this period included javelins, long-reaching lances, axes, and maces.
The most widespread employment of heavy cavalry at this time was found in the forces of the Iranian empires, the Parthians and their Persian Sasanian successors. Both, but especially the former, were famed for the cataphract (fully armored cavalry armed with lances) even though the majority of their forces consisted of lighter horse archers. The West first encountered this eastern heavy cavalry during the Hellenistic period with further intensive contacts during the eight centuries of the Roman–Persian Wars. At first the Parthians' mobility greatly confounded the Romans, whose armoured close-order infantry proved unable to match the speed of the Parthians. However, later the Romans would successfully adapt such heavy armor and cavalry tactics by creating their own units of cataphracts and clibanarii.
The decline of the Roman infrastructure made it more difficult to field large infantry forces, and during the 4th and 5th centuries cavalry began to take a more dominant role on the European battlefield, also in part made possible by the appearance of new, larger breeds of horses. The replacement of the Roman saddle by variants on the Scythian model, with pommel and cantle, was also a significant factor as was the adoption of stirrups and the concomitant increase in stability of the rider's seat. Armored cataphracts began to be deployed in eastern Europe and the Near East, following the precedents established by Persian forces, as the main striking force of the armies in contrast to the earlier roles of cavalry as scouts, raiders, and outflankers.
The late-Roman cavalry tradition of organized units in a standing army differed fundamentally from the nobility of the Germanic invaders—individual warriors who could afford to provide their own horses and equipment. While there was no direct linkage with these predecessors, the early medieval knight also developed as a member of a social and martial elite, able to meet the considerable expenses required by his role from grants of land and other incomes.
Xiongnu, Tujue, Avars, Kipchaks, Khitans, Mongols, Don Cossacks and the various Turkic peoples are also examples of the horse-mounted groups that managed to gain substantial successes in military conflicts with settled agrarian and urban societies, due to their strategic and tactical mobility. As European states began to assume the character of bureaucratic nation-states supporting professional standing armies, recruitment of these mounted warriors was undertaken in order to fill the strategic roles of scouts and raiders.
The best-known instance of the continued employment of mounted tribal auxiliaries was the Cossack cavalry regiments of the Russian Empire. In Eastern Europe, and out onto the steppes, cavalry remained important much longer and dominated the scene of warfare until the early 17th century and even beyond, as the strategic mobility of cavalry was crucial for the semi-nomadic pastoralist lives that many steppe cultures led. Tibetans also had a tradition of cavalry warfare, employed in several military engagements with the Chinese Tang dynasty (618–907 AD).
Further east, the military history of China, specifically northern China, held a long tradition of intense military exchange between Han Chinese infantry forces of the settled dynastic empires and the mounted nomads or "barbarians" of the north. The naval history of China was centered more to the south, where mountains, rivers, and large lakes necessitated the employment of a large and well-kept navy.
In 307 BC, King Wuling of Zhao, ruler of one of the successor states of Jin, ordered his commanders and troops to adopt the trousers of the nomads and to practice the nomads' form of mounted archery to hone their new cavalry skills.
The adoption of massed cavalry in China also broke the tradition of the chariot-riding Chinese aristocracy in battle, a practice that had been in use since the ancient Shang dynasty (c. 1600–1050 BC). By this time large Chinese infantry-based armies of 100,000 to 200,000 troops were buttressed with several hundred thousand cavalry in support or as an effective striking force. The handheld pistol-and-trigger crossbow was invented in China in the fourth century BC; the Song dynasty scholars Zeng Gongliang, Ding Du, and Yang Weide wrote in their book Wujing Zongyao (1044 AD) that massed missile fire by crossbowmen was the most effective defense against enemy cavalry charges.
On many occasions the Chinese studied nomadic cavalry tactics and applied the lessons in creating their own potent cavalry forces, while in others they simply recruited the tribal horsemen wholesale into their armies; and in yet other cases nomadic empires proved eager to enlist Chinese infantry and engineering, as in the case of the Mongol Empire and its sinicized part, the Yuan dynasty (1279–1368). The Chinese recognized early on, during the Han dynasty (202 BC – 220 AD), that they were at a disadvantage in lacking the number of horses the northern nomadic peoples mustered in their armies. Emperor Wu of Han (r. 141–87 BC) went to war with the Dayuan for this reason, since the Dayuan were hoarding a massive number of tall, strong, Central Asian-bred horses in the Hellenized Greek region of Fergana (established slightly earlier by Alexander the Great). Although experiencing some defeats early on in the campaign, Emperor Wu's war from 104 BC to 102 BC succeeded in gathering the prized tribute of horses from Fergana.
Cavalry tactics in China were enhanced by the invention of the saddle-attached stirrup by at least the 4th century, as the oldest reliable depiction of a rider with paired stirrups was found in a Jin dynasty tomb of the year 322 AD. The Chinese invention of the horse collar by the 5th century was also a great improvement over the breast harness, allowing the horse to haul greater weights without a heavy burden on its skeletal structure.
Horse warfare in Korea began during the ancient Korean kingdom of Gojoseon. Since at least the 3rd century BC, northern nomadic peoples and the Yemaek peoples influenced Korean warfare. By roughly the first century BC, the ancient kingdom of Buyeo also had mounted warriors. The cavalry of Goguryeo, one of the Three Kingdoms of Korea, were called Gaemamusa (개마무사, 鎧馬武士), and were renowned as a fearsome heavy cavalry force. King Gwanggaeto the Great often led expeditions with his cavalry into Baekje, the Gaya confederacy, Buyeo and Later Yan, and against Japanese invaders.
In the 12th century, Jurchen tribes began to violate the Goryeo–Jurchen borders, and eventually invaded Goryeo Korea. After experiencing the invasion by the Jurchen, the Korean general Yun Gwan realized that Goryeo lacked efficient cavalry units. He reorganized the Goryeo military into a professional army that would contain well-trained cavalry units. In 1107, the Jurchen were ultimately defeated and surrendered to Yun Gwan. To mark the victory, General Yun built nine fortresses to the northeast of the Goryeo–Jurchen borders (동북 9성, 東北 九城).
The ancient Japanese of the Kofun period also adopted cavalry and equine culture by the 5th century AD. The emergence of the samurai aristocracy led to the development of armoured horse archers, who themselves later developed into charging lancer cavalry as gunpowder weapons rendered bows obsolete. Japanese cavalry was largely made up of landowners who rode on horseback to better survey the troops they were called upon to bring to an engagement, rather than fighting in the massed cavalry units of traditional mounted warfare seen in other cultures.
An example is Yabusame (流鏑馬), a type of mounted archery in traditional Japanese archery. An archer on a running horse shoots three special "turnip-headed" arrows successively at three wooden targets.
This style of archery has its origins at the beginning of the Kamakura period. Minamoto no Yoritomo became alarmed at the lack of archery skills his samurai had. He organized yabusame as a form of practice. Currently, the best places to see yabusame performed are at the Tsurugaoka Hachiman-gū in Kamakura and Shimogamo Shrine in Kyoto (during Aoi Matsuri in early May). It is also performed in Samukawa and on the beach at Zushi, as well as other locations.
Kasagake or Kasakake (笠懸, かさがけ lit. "hat shooting") is a type of Japanese mounted archery. In contrast to yabusame, the types of targets are various and the archer shoots without stopping the horse. While yabusame has been performed as part of formal ceremonies, kasagake developed as a game and a practice of martial arts, focusing on the technical elements of horse archery.
In the Indian subcontinent, cavalry played a major role from the Gupta dynasty (320–600) period onwards. India also has the oldest evidence for the introduction of toe stirrups.
Indian literature contains numerous references to the mounted warriors of the Central Asian horse nomads, notably the Sakas, Kambojas, Yavanas, Pahlavas and Paradas. Numerous Puranic texts refer to a conflict in ancient India (16th century BC) in which the horsemen of five nations, called the "Five Hordes" (pañca.ganan) or Kṣatriya hordes (Kṣatriya ganah), attacked and captured the state of Ayudhya by dethroning its Vedic King Bahu.
The Mahabharata, Ramayana, numerous Puranas and some foreign sources attest that the Kamboja cavalry frequently played a role in ancient wars. V. R. Ramachandra Dikshitar writes: "Both the Puranas and the epics agree that the horses of the Sindhu and Kamboja regions were of the finest breed, and that the services of the Kambojas as cavalry troopers were utilised in ancient wars". The J.A.O.S. writes: "Most famous horses are said to come either from Sindhu or Kamboja; of the latter (i.e. the Kamboja), the Indian epic Mahabharata speaks among the finest horsemen".
The Mahabharata speaks of the esteemed cavalry of the Kambojas, Sakas, Yavanas and Tusharas, all of whom had participated in the Kurukshetra war under the supreme command of Kamboja ruler Sudakshin Kamboj.
The Mahabharata and Vishnudharmottara Purana pay especial attention to the Kambojas, Yavanas, Gandharas etc. being ashva.yuddha.kushalah (expert cavalrymen). In the Mahabharata war, the Kamboja cavalry, along with that of the Sakas and Yavanas, is reported to have been enlisted by the Kuru king Duryodhana of Hastinapura.
Herodotus (c. 484 – c. 425 BC) attests that the Gandarian mercenaries (i.e. Gandharans/Kambojans of the Gandari satrapy of the Achaemenids) from the 20th satrapy of the Achaemenids were recruited into the army of emperor Xerxes I (486–465 BC), which he led against Hellas. Similarly, the men of the Mountain Land from north of the Kabul River, equivalent to medieval Kohistan (Pakistan), figure in the army of Darius III against Alexander at Arbela, providing a cavalry force and 15 elephants. This evidently refers to Kamboja cavalry south of the Hindu Kush.
The Kambojas were famous for their horses, as well as their cavalrymen (asva-yuddha-Kushalah). On account of their supreme position in horse (Ashva) culture, they were also popularly known as Ashvakas, i.e. the "horsemen", and their land was known as the "Home of Horses". They are the Assakenoi and Aspasioi of the Classical writings, and the Ashvakayanas and Ashvayanas in Pāṇini's Ashtadhyayi. The Assakenoi had faced Alexander with 30,000 infantry, 20,000 cavalry and 30 war elephants. Scholars have identified the Assakenoi and Aspasioi clans of the Kunar and Swat valleys as a section of the Kambojas. These hardy tribes offered stubborn resistance to Alexander (c. 326 BC) during the latter's campaign in the Kabul, Kunar and Swat valleys and even extracted the praise of Alexander's historians. These highlanders, designated as "parvatiya Ayudhajivinah" in Pāṇini's Astadhyayi, were rebellious, fiercely independent and freedom-loving cavalrymen who never easily yielded to any overlord.
The Sanskrit drama Mudra-rakshasa by Visakha Dutta and the Jaina work Parishishtaparvan refer to Chandragupta's (c. 320 BC – c. 298 BC) alliance with the Himalayan king Parvataka. The Himalayan alliance gave Chandragupta a formidable composite army made up of the cavalry forces of the Shakas, Yavanas, Kambojas, Kiratas, Parasikas and Bahlikas, as attested by the Mudra-rakshasa (Mudra-Rakshasa 2). These hordes helped Chandragupta Maurya defeat the ruler of Magadha and placed Chandragupta on the throne, thus laying the foundations of the Mauryan dynasty in Northern India.
The cavalry of the Hunas and the Kambojas is also attested in the Raghu Vamsa, an epic poem by the Sanskrit poet Kalidasa. The Raghu of Kalidasa is believed to be Chandragupta II (Vikramaditya) (375–413/15 AD) of the well-known Gupta dynasty.
As late as the mediaeval era, the Kamboja cavalry had also formed part of the Gurjara-Pratihara armed forces from the eighth to the 10th centuries AD. They had come to Bengal with the Pratiharas when the latter conquered part of the province.
The ancient Kambojas organised military sanghas and shrenis (corporations) to manage their political and military affairs, as the Arthashastra of Kautilya as well as the Mahabharata record. They are described as Ayuddha-jivi or Shastr-opajivis (nations-in-arms), which also means that the Kamboja cavalry offered its military services to other nations as well. There are numerous references to Kambojas having been requisitioned as cavalry troopers in ancient wars by outside nations.
The Mughal armies (lashkar) were primarily a cavalry force. The elite corps were the ahadi who provided direct service to the Emperor and acted as guard cavalry. Supplementary cavalry or dakhilis were recruited, equipped and paid by the central state. This was in contrast to the tabinan horsemen who were the followers of individual noblemen. Their training and equipment varied widely but they made up the backbone of the Mughal cavalry. Finally there were tribal irregulars led by and loyal to tributary chiefs. These included Hindus, Afghans and Turks summoned for military service when their autonomous leaders were called on by the Imperial government.
As the quality and availability of heavy infantry declined in Europe with the fall of the Roman Empire, heavy cavalry became more effective. Infantry that lack the cohesion and discipline of tight formations are more susceptible to being broken and scattered by shock combat—the main role of heavy cavalry, which rose to become the dominant force on the European battlefield.
As heavy cavalry increased in importance, it became the main focus of military development. The arms and armour for heavy cavalry grew heavier and more extensive, the high-backed saddle developed, and stirrups and spurs were added, increasing the advantage of heavy cavalry still further.
This shift in military importance was reflected in an increasingly hierarchical society as well. From the late 10th century onwards heavily armed horsemen, milites or knights, emerged as an expensive elite taking centre stage both on and off the battlefield. This class of aristocratic warriors was considered the "ultimate" in heavy cavalry: well-equipped with the best weapons, state-of-the-art armour from head to foot, leading with the lance in battle in a full-gallop, close-formation "knightly charge" that might prove irresistible, winning the battle almost as soon as it began.
But knights remained a minority of the total available combat forces; the expense of arms, armour, and horses was only affordable to a select few. While mounted men-at-arms focused on the narrow role of shock combat, medieval armies relied on a large variety of foot troops to fulfill all the rest (skirmishing, flank guards, scouting, holding ground, etc.). Medieval chroniclers tended to pay undue attention to the knights at the expense of the common soldiers, which led early students of military history to suppose that heavy cavalry was the only force that mattered on medieval European battlefields. But well-trained and disciplined infantry could defeat knights.
Massed English longbowmen triumphed over French cavalry at Crécy, Poitiers and Agincourt, while at Gisors (1188), Bannockburn (1314), and Laupen (1339), foot-soldiers proved they could resist cavalry charges as long as they held their formation. Once the Swiss developed their pike squares for offensive as well as defensive use, infantry started to become the principal arm. This aggressive new doctrine gave the Swiss victory over a range of adversaries, and their enemies found that the only reliable way to defeat them was by the use of an even more comprehensive combined arms doctrine, as evidenced in the Battle of Marignano. The introduction of missile weapons that required less skill than the longbow, such as the crossbow and hand cannon, also helped remove the focus somewhat from cavalry elites to masses of cheap infantry equipped with easy-to-learn weapons. These missile weapons were very successfully used in the Hussite Wars, in combination with Wagenburg tactics.
This gradual rise in the dominance of infantry led to the adoption of dismounted tactics. From the earliest times knights and mounted men-at-arms had frequently dismounted to handle enemies they could not overcome on horseback, such as in the Battle of the Dyle (891) and the Battle of Bremule (1119), but after the 1350s this trend became more marked with the dismounted men-at-arms fighting as super-heavy infantry with two-handed swords and poleaxes. In any case, warfare in the Middle Ages tended to be dominated by raids and sieges rather than pitched battles, and mounted men-at-arms rarely had any choice other than dismounting when faced with the prospect of assaulting a fortified position.
The Islamic Prophet Muhammad made use of cavalry in many of his military campaigns, including the Expedition of Dhu Qarad and the expedition of Zaid ibn Haritha in al-Is, which took place in September 627 AD, the fifth month of 6 AH of the Islamic calendar.
Early organized Arab mounted forces under the Rashidun caliphate comprised light cavalry armed with lance and sword. Their main role was to attack the enemy flanks and rear. These relatively lightly armored horsemen formed the most effective element of the Muslim armies during the later stages of the Islamic conquest of the Levant. The best use of this lightly armed, fast-moving cavalry was revealed at the Battle of Yarmouk (636 AD), in which Khalid ibn Walid, knowing the skills of his horsemen, used them to turn the tables at every critical instance of the battle with their ability to engage, disengage, then turn back and attack again from the flank or rear. A strong cavalry regiment was formed by Khalid ibn Walid which included the veterans of the campaigns of Iraq and Syria. Early Muslim historians gave it the name Tali'a mutaharrikah (طليعة متحركة), or the Mobile Guard. It was used as an advance guard and a strong striking force to rout the opposing armies, its greater mobility giving it an upper hand when maneuvering against any Byzantine army. With this mobile striking force, the conquest of Syria was made easy.
The Battle of Talas in 751 AD was a conflict between the Arab Abbasid Caliphate and the Chinese Tang dynasty over the control of Central Asia. Chinese infantry were routed by Arab cavalry near the bank of the River Talas.
Until the 11th century the classic cavalry strategy of the Arab Middle East incorporated the razzia tactics of fast-moving raids by mixed bodies of horsemen and infantry. Under the talented leadership of Saladin and other Islamic commanders the emphasis changed to Mamluk horse-archers backed by bodies of irregular light cavalry. Trained to rapidly disperse, harass, and regroup, these flexible mounted forces proved capable of withstanding the previously invincible heavy knights of the western crusaders at battles such as Hattin in 1187.
Originating in the 9th century as Central Asian ghulams or captives utilised as mounted auxiliaries by Arab armies, Mamluks were subsequently trained as cavalry soldiers rather than solely as mounted archers, with increased priority being given to the use of lances and swords. Mamluks were to follow the dictates of al-furusiyya, a code of conduct that included values like courage and generosity, but also the doctrine of cavalry tactics, horsemanship, archery and the treatment of wounds.
By the late 13th century the Mamluk armies had evolved into a professional elite of cavalry, backed by more numerous but less well-trained footmen.
The Islamic Berber states of North Africa employed elite mounted cavalry armed with spears, following the model of the original Arab occupiers of the region. Horse harness and weapons were manufactured locally, and the six-monthly stipends for horsemen were double those of their infantry counterparts. During the 8th-century Islamic conquest of Iberia, large numbers of horses and riders were shipped from North Africa to specialise in raiding and to provide support for the massed Berber footmen of the main armies.
Maghrebi traditions of mounted warfare eventually influenced a number of sub-Saharan African polities in the medieval era. The Esos of Ikoyi, military aristocrats of the Yoruba peoples, were a notable manifestation of this phenomenon.
The Qizilbash were a class of Safavid militant warriors in Iran during the 15th to 18th centuries, who often fought as elite cavalry.
During its period of greatest expansion, from the 14th to 17th centuries, cavalry formed the powerful core of the Ottoman armies. Registers dated 1475 record 22,000 Sipahi feudal cavalry levied in Europe, 17,000 Sipahis recruited from Anatolia, and 3,000 Kapikulu (regular bodyguard cavalry). During the 18th century, however, the Ottoman mounted troops evolved into light cavalry serving in the thinly populated regions of the Middle East and North Africa. Such frontier horsemen were largely raised by local governors and were separate from the main field armies of the Ottoman Empire. At the beginning of the 19th century, modernised Nizam-ı Cedid ("New Order") regiments appeared, including full-time cavalry units officered from the horse guards of the Sultan.
Ironically, the rise of infantry in the early 16th century coincided with the "golden age" of heavy cavalry; a French or Spanish army at the beginning of the century could have up to half its numbers made up of various kinds of light and heavy cavalry, whereas in earlier medieval and later 17th-century armies the proportion of cavalry was seldom more than a quarter.
Knighthood largely lost its military functions and became more closely tied to social and economic prestige in an increasingly capitalistic Western society. With the rise of drilled and trained infantry, the mounted men-at-arms, now sometimes called gendarmes and often part of the standing army themselves, adopted the same role as in the Hellenistic age, that of delivering a decisive blow once the battle was already engaged, either by charging the enemy in the flank or attacking their commander-in-chief.
From the 1550s onwards, the use of gunpowder weapons solidified infantry's dominance of the battlefield and began to allow true mass armies to develop. This is closely related to the increase in the size of armies throughout the early modern period; heavily armored cavalrymen were expensive to raise and maintain and it took years to train a skilled horseman or a horse, while arquebusiers and later musketeers could be trained and kept in the field at much lower cost, and were much easier to recruit.
The Spanish tercio and later formations relegated cavalry to a supporting role. The pistol was specifically developed to try to bring cavalry back into the conflict, together with manoeuvres such as the caracole. The caracole was not particularly successful, however, and the charge (whether with lance, sword, or pistol) remained the primary mode of employment for many types of European cavalry, although by this time it was delivered in much deeper formations and with greater discipline than before. The demi-lancers and the heavily armored sword-and-pistol reiters were among the types of cavalry whose heyday was in the 16th and 17th centuries. During this period the Polish Winged hussars were a dominant heavy cavalry force in Eastern Europe, initially achieving great success against Swedes, Russians, Turks and others, until repeatedly beaten by combined arms tactics, increased firepower, or in melee by the Drabant cavalry of the Swedish Empire. The Winged hussars' military prowess peaked at the Siege of Vienna in 1683, when hussar banners participated in the largest cavalry charge in history and successfully repelled the Ottoman attack. From their last engagement in 1702 (at the Battle of Kliszów) until 1776, the obsolete Winged hussars were demoted and largely assigned to ceremonial roles.
Cavalry retained an important role in this age of regularization and standardization across European armies. They remained the primary choice for confronting enemy cavalry. Attacking an unbroken infantry force head-on usually resulted in failure, but extended linear infantry formations were vulnerable to flank or rear attacks. Cavalry was important at Blenheim (1704), Rossbach (1757), Marengo (1800), Eylau and Friedland (1807), remaining significant throughout the Napoleonic Wars.
Even with the increasing prominence of infantry, cavalry still had an irreplaceable role in armies, due to their greater mobility. Their non-battle duties often included patrolling the fringes of army encampments, with standing orders to intercept suspected shirkers and deserters, as well as serving as outpost pickets in advance of the main body. During battle, lighter cavalry such as hussars and uhlans might skirmish with other cavalry, attack light infantry, or charge and either capture enemy artillery or render it useless by plugging the touchholes with iron spikes. Heavier cavalry such as cuirassiers, dragoons, and carabiniers usually charged infantry formations or opposing cavalry in order to rout them. Both light and heavy cavalry pursued retreating enemies, the phase of battle in which most casualties occurred.
The greatest cavalry charge of modern history was at the 1807 Battle of Eylau, when the entire 11,000-strong French cavalry reserve, led by Joachim Murat, launched a huge charge on and through the Russian infantry lines. Cavalry's dominating and menacing presence on the battlefield was countered by the use of infantry squares. The most notable examples were at the Battle of Quatre Bras and later at the Battle of Waterloo, at the latter of which repeated charges by up to 9,000 French cavalrymen, ordered by Michel Ney, failed to break the British-Allied army, which had formed into squares.
Massed infantry, especially those formed in squares, were deadly to cavalry, but offered an excellent target for artillery. Once a bombardment had disordered the infantry formation, cavalry were able to rout and pursue the scattered foot soldiers. It was not until individual firearms gained accuracy and improved rates of fire that cavalry was diminished in this role as well. Even then, light cavalry remained an indispensable tool for scouting, screening the army's movements, and harassing the enemy's supply lines, until military aircraft supplanted them in this role in the early stages of World War I.
By the beginning of the 19th century, European cavalry fell into four main categories: cuirassiers (heavy cavalry), dragoons (originally mounted infantry), hussars (light cavalry), and lancers or uhlans (light cavalry armed with lances).
There were cavalry variations for individual nations as well: France had the chasseurs à cheval; Prussia had the Jäger zu Pferde; Bavaria, Saxony and Austria had the Chevaulegers; and Russia had Cossacks. Britain, from the mid-18th century, had Light Dragoons as light cavalry and Dragoons, Dragoon Guards and Household Cavalry as heavy cavalry. Only after the end of the Napoleonic wars were the Household Cavalry equipped with cuirasses, and some other regiments were converted to lancers. In the United States Army prior to 1862 the cavalry were almost always dragoons. The Imperial Japanese Army had its cavalry uniformed as hussars, but they fought as dragoons.
In the Crimean War, the Charge of the Light Brigade and the Thin Red Line at the Battle of Balaclava showed the vulnerability of cavalry when deployed without effective support.
During the Franco-Prussian War, at the Battle of Mars-la-Tour in 1870, a Prussian cavalry brigade decisively smashed the centre of the French battle line, after skilfully concealing their approach. This event became known as Von Bredow's Death Ride after the brigade commander Adalbert von Bredow; it would be used in the following decades to argue that massed cavalry charges still had a place on the modern battlefield.
Cavalry found a new role in colonial campaigns (irregular warfare), where modern weapons were lacking and the slow moving infantry-artillery train or fixed fortifications were often ineffective against indigenous insurgents (unless the latter offered a fight on an equal footing, as at Tel-el-Kebir, Omdurman, etc.). Cavalry "flying columns" proved effective, or at least cost-effective, in many campaigns—although an astute native commander (like Samori in western Africa, Shamil in the Caucasus, or any of the better Boer commanders) could turn the tables and use the greater mobility of their cavalry to offset their relative lack of firepower compared with European forces.
In 1903 the British Indian Army maintained forty regiments of cavalry, numbering about 25,000 Indian sowars (cavalrymen), with British and Indian officers.
Several of the more famous regiments in the lineages of the modern Indian and Pakistani armies are still active, though now as armoured formations, for example the Guides Cavalry of Pakistan.
The French Army maintained substantial cavalry forces in Algeria and Morocco from 1830 until the end of the Second World War. Much of the Mediterranean coastal terrain was suitable for mounted action, and there was a long-established culture of horsemanship amongst the Arab and Berber inhabitants. The French forces included Spahis, Chasseurs d'Afrique, Foreign Legion cavalry and mounted Goumiers. Both Spain and Italy raised cavalry regiments from amongst the indigenous horsemen of their North African territories (see regulares, Italian Spahis and savari respectively).
Imperial Germany employed mounted formations in South West Africa as part of the Schutztruppen (colonial army) garrisoning the territory.
In the early American Civil War the regular United States Army mounted rifle, dragoon, and two existing cavalry regiments were reorganized and renamed cavalry regiments, of which there were six. Over a hundred other federal and state cavalry regiments were organized, but the infantry played a much larger role in many battles due to its larger numbers, lower cost per rifle fielded, and much easier recruitment. However, cavalry saw a role as part of screening forces and in foraging and scouting. The later phases of the war saw the Federal army develop a truly effective cavalry force fighting as scouts, raiders, and, with repeating rifles, as mounted infantry. The distinguished 1st Virginia Cavalry ranks as one of the most effective and successful cavalry units on the Confederate side. Noted cavalry commanders included Confederate general J.E.B. Stuart, Nathan Bedford Forrest, and John Singleton Mosby (a.k.a. "The Grey Ghost") and, on the Union side, Philip Sheridan and George Armstrong Custer. After the Civil War, as the volunteer armies disbanded, the regular army cavalry regiments increased in number from six to ten, among them Custer's U.S. 7th Cavalry Regiment of Little Bighorn fame, and the African-American U.S. 9th Cavalry Regiment and U.S. 10th Cavalry Regiment. The black units, along with others (both cavalry and infantry), collectively became known as the Buffalo Soldiers. According to Robert M. Utley, these regiments, which rarely took the field as complete organizations, served throughout the American Indian Wars through the close of the frontier in the 1890s. Volunteer cavalry regiments like the Rough Riders consisted of horsemen such as cowboys, ranchers and other outdoorsmen who served as cavalry in the United States military.
At the beginning of the 20th century, all armies still maintained substantial cavalry forces, although there was contention over whether their role should revert to that of mounted infantry (the historic dragoon function). With motorised vehicles and aircraft still under development, horse-mounted troops remained the only fully mobile forces available for manoeuvre warfare until 1914.
Following the experience of the South African War of 1899–1902 (where mounted Boer citizen commandos fighting on foot from cover proved more effective than regular cavalry), the British Army withdrew lances for all but ceremonial purposes and placed a new emphasis on training for dismounted action in 1903. Lances were however readopted for active service in 1912.
In 1882, the Imperial Russian Army converted all its line hussar and lancer regiments to dragoons, with an emphasis on mounted infantry training. In 1910 these regiments reverted to their historic roles, designations and uniforms.
By 1909, official regulations dictating the role of the Imperial German cavalry had been revised to indicate an increasing realization of the realities of modern warfare. The massive cavalry charge in three waves which had previously marked the end of annual maneuvers was discontinued, and a new emphasis was placed in training on scouting, raiding and pursuit rather than on main battle involvement. The perceived importance of cavalry was however still evident, with thirteen new regiments of mounted rifles (Jäger zu Pferde) being raised shortly before the outbreak of war in 1914.
In spite of significant experience in mounted warfare in Morocco during 1908–14, the French cavalry remained a highly conservative institution. The traditional tactical distinctions between heavy, medium, and light cavalry branches were retained. French cuirassiers wore breastplates and plumed helmets unchanged from the Napoleonic period during the early months of World War I. Dragoons were similarly equipped, though they did not wear cuirasses and did carry lances. Light cavalry were described as being "a blaze of colour". French cavalry of all branches were well mounted and were trained to change position and charge at full gallop. One weakness in training was that French cavalrymen seldom dismounted on the march, and their horses suffered heavily from raw backs in August 1914.
In August 1914, all combatant armies still retained substantial numbers of cavalry, and the mobile nature of the opening battles on both Eastern and Western Fronts provided a number of instances of traditional cavalry actions, though on a smaller and more scattered scale than those of previous wars. The 110 regiments of Imperial German cavalry, while as colourful and traditional as any in peacetime appearance, had adopted a practice of falling back on infantry support when any substantial opposition was encountered. These cautious tactics aroused derision amongst their more conservative French and Russian opponents but proved appropriate to the new nature of warfare. A single attempt by the German army, on 12 August 1914, to use six regiments of massed cavalry to cut off the Belgian field army from Antwerp foundered when they were driven back in disorder by rifle fire. The two German cavalry brigades involved lost 492 men and 843 horses in repeated charges against dismounted Belgian lancers and infantry. One of the last recorded charges by French cavalry took place on the night of 9/10 September 1914, when a squadron of the 16th Dragoons overran a German airfield at Soissons while suffering heavy losses. Once the front lines stabilised on the Western Front with the start of trench warfare, a combination of barbed wire, uneven muddy terrain, machine guns and rapid-fire rifles proved deadly to horse-mounted troops, and by early 1915 most cavalry units were no longer seeing front-line action.
On the Eastern Front, a more fluid form of warfare arose from flat open terrain favorable to mounted warfare. On the outbreak of war in 1914 the bulk of the Russian cavalry was deployed at full strength in frontier garrisons and, during the period that the main armies were mobilizing, scouting and raiding into East Prussia and Austrian Galicia was undertaken by mounted troops trained to fight with sabre and lance in the traditional style. On 21 August 1914 the 4th Austro-Hungarian Kavalleriedivision fought a major mounted engagement at Jaroslavic with the Russian 10th Cavalry Division, in what was arguably the final historic battle to involve thousands of horsemen on both sides. While this was the last massed cavalry encounter on the Eastern Front, the absence of good roads limited the use of mechanized transport, and even the technologically advanced Imperial German Army continued to deploy up to twenty-four horse-mounted divisions in the East as late as 1917.
For the remainder of the War on the Western Front, cavalry had virtually no role to play. The British and French armies dismounted many of their cavalry regiments and used them in infantry and other roles: the Life Guards for example spent the last months of the War as a machine gun corps; and the Australian Light Horse served as light infantry during the Gallipoli campaign. In September 1914 cavalry comprised 9.28% of the total manpower of the British Expeditionary Force in France—by July 1918 this proportion had fallen to 1.65%. As early as the first winter of the war most French cavalry regiments had dismounted a squadron each, for service in the trenches. The French cavalry numbered 102,000 in May 1915 but had been reduced to 63,000 by October 1918. The German Army dismounted nearly all their cavalry in the West, maintaining only one mounted division on that front by January 1917.
Italy entered the war in 1915 with thirty regiments of line cavalry, lancers and light horse. While employed effectively against their Austro-Hungarian counterparts during the initial offensives across the Isonzo River, the Italian mounted forces ceased to have a significant role as the front shifted into mountainous terrain. By 1916 most cavalry machine-gun sections and two complete cavalry divisions had been dismounted and seconded to the infantry.
Some cavalry were retained as mounted troops in reserve behind the lines, in anticipation of a penetration of the opposing trenches that it seemed would never come. Tanks, introduced on the Western Front by the British in September 1916 during the Battle of the Somme, had the capacity to achieve such breakthroughs but did not have the reliable range to exploit them. In their first major use at the Battle of Cambrai (1917), the plan was for a cavalry division to follow behind the tanks; however, the cavalry were unable to cross a canal because a tank had broken the only bridge. On a few other occasions throughout the war, cavalry were readied in significant numbers for involvement in major offensives, such as the Battle of Caporetto and the Battle of Moreuil Wood. However, it was not until the German Army had been forced to retreat in the Hundred Days Offensive of 1918 that limited numbers of cavalry were again able to operate with any effectiveness in their intended role. There was a successful charge by the British 7th Dragoon Guards on the last day of the war.
In the wider spaces of the Eastern Front, a more fluid form of warfare continued and there was still a use for mounted troops. Some wide-ranging actions were fought, again mostly in the early months of the war. However, even here the value of cavalry was overrated and the maintenance of large mounted formations at the front by the Russian Army put a major strain on the railway system, to little strategic advantage. In February 1917, the Russian regular cavalry (exclusive of Cossacks) was reduced by nearly a third from its peak number of 200,000, as two squadrons of each regiment were dismounted and incorporated into additional infantry battalions. Their Austro-Hungarian opponents, plagued by a shortage of trained infantry, had been obliged to progressively convert most horse cavalry regiments to dismounted rifle units starting in late 1914.
In the Middle East, during the Sinai and Palestine Campaign mounted forces (British, Indian, Ottoman, Australian, Arab and New Zealand) retained an important strategic role both as mounted infantry and cavalry.
In Egypt, mounted formations like the New Zealand Mounted Rifles Brigade and the Australian Light Horse of the ANZAC Mounted Division, operating as mounted infantry, drove German and Ottoman forces back from Romani to Magdhaba and Rafa, and out of the Egyptian Sinai Peninsula in 1916.
After a stalemate on the Gaza–Beersheba line between March and October 1917, Beersheba was captured by the Australian Mounted Division's 4th Light Horse Brigade. Their mounted charge succeeded after a coordinated attack by the British infantry and Yeomanry cavalry and the Australian and New Zealand Light Horse and Mounted Rifles brigades. A series of coordinated attacks by these Egyptian Expeditionary Force infantry and mounted troops were also successful at the Battle of Mughar Ridge, during which the British infantry divisions and the Desert Mounted Corps drove two Ottoman armies back to the Jaffa–Jerusalem line. The infantry, with mainly dismounted cavalry and mounted infantry, fought in the Judean Hills to eventually almost encircle Jerusalem, which was occupied shortly afterwards.
During a pause in operations necessitated by the German spring offensive of 1918 on the Western Front, joint infantry and mounted infantry attacks towards Amman and Es Salt resulted in retreats back to the Jordan Valley, which continued to be occupied by mounted divisions during the summer of 1918.
The Australian Mounted Division was armed with swords. In September, the successful breaching of the Ottoman line on the Mediterranean coast by the British Empire infantry of XXI Corps was followed by attacks by the 4th Cavalry Division, 5th Cavalry Division and Australian Mounted Division, which almost encircled two Ottoman armies in the Judean Hills, forcing their retreat. Meanwhile, Chaytor's Force of infantry and mounted infantry in the ANZAC Mounted Division held the Jordan Valley, covering the right flank, and later advanced eastwards to capture Es Salt and Amman along with half of a third Ottoman army. The 4th Cavalry Division and the Australian Mounted Division, followed by the 5th Cavalry Division, then pursued the retreating forces to Damascus. Armoured cars and 5th Cavalry Division lancers were still pursuing Ottoman units north of Aleppo when the Armistice of Mudros was signed by the Ottoman Empire.
A combination of military conservatism in almost all armies and post-war financial constraints prevented the lessons of 1914–1918 being acted on immediately. There was a general reduction in the number of cavalry regiments in the British, French, Italian and other Western armies but it was still argued with conviction (for example in the 1922 edition of the Encyclopædia Britannica) that mounted troops had a major role to play in future warfare. The 1920s saw an interim period during which cavalry remained as a proud and conspicuous element of all major armies, though much less so than prior to 1914.
Cavalry was extensively used in the Russian Civil War and the Polish–Soviet War. The last major cavalry battle was the Battle of Komarów in 1920, between Poland and the Russian Bolsheviks. Colonial warfare in Morocco, Syria, the Middle East and on the North West Frontier of India provided some opportunities for mounted action against enemies lacking advanced weaponry.
The post-war German Army (Reichsheer) was permitted a large proportion of cavalry (18 regiments or 16.4% of total manpower) under the conditions of the Treaty of Versailles.
The British Army mechanised all cavalry regiments between 1929 and 1941, converting them from horses to armoured vehicles to form the Royal Armoured Corps together with the Royal Tank Regiment. The U.S. Cavalry abandoned its sabres in 1934 and commenced the conversion of its horsed regiments to mechanized cavalry, starting with the First Regiment of Cavalry in January 1933.
During the Turkish War of Independence, Turkish cavalry under General Fahrettin Altay was instrumental in the Kemalist victory over the invading Greek Army in 1922 during the Battle of Dumlupınar. The 5th Cavalry Division was able to slip behind the main Greek army, cutting off all communication and supply lines as well as retreat options. This forced the surrender of the remaining Greek forces and may have been the last time in history that cavalry played a decisive role in the outcome of a battle.
During the 1930s, the French Army experimented with integrating mounted and mechanised cavalry units into larger formations. Dragoon regiments were converted to motorised infantry (trucks and motorcycles), and cuirassiers to armoured units, while light cavalry (chasseurs à cheval, hussars and spahis) remained as mounted sabre squadrons. The theory was that mixed forces comprising these diverse units could utilise the strengths of each according to circumstances. In practice, mounted troops proved unable to keep up with fast-moving mechanised units over any distance.
The 39 cavalry regiments of the British Indian Army were reduced to 21 as the result of a series of amalgamations immediately following World War I. The new establishment remained unchanged until 1936, when three regiments were redesignated as permanent training units, each with six still-mounted regiments linked to them. In 1938, the process of mechanization began with the conversion of a full cavalry brigade (two Indian regiments and one British) to armoured car and tank units. By the end of 1940, all of the Indian cavalry had been mechanized, initially, and in the majority of cases, to motorized infantry transported in 15-cwt trucks. The last horsed regiment of the British Indian Army (other than the Viceroy's Bodyguard and some Indian States Forces regiments) was the 19th King George's Own Lancers, which had its final mounted parade at Rawalpindi on 28 October 1939. This unit still exists in the Pakistan Army as an armored regiment.
While most armies still maintained cavalry units at the outbreak of World War II in 1939, significant mounted action was largely restricted to the Polish, Balkan, and Soviet campaigns. Rather than charge their mounts into battle, cavalry units were either used as mounted infantry (using horses to move into position and then dismounting for combat) or as reconnaissance units (especially in areas not suited to tracked or wheeled vehicles).
A popular myth is that Polish cavalry armed with lances charged German tanks during the September 1939 campaign. This arose from misreporting of a single clash on 1 September near Krojanty, when two squadrons of the Polish 18th Lancers armed with sabres scattered German infantry before being caught in the open by German armoured cars. Two examples illustrate how the myth developed. First, because motorised vehicles were in short supply, the Poles used horses to pull anti-tank weapons into position. Second, there were a few incidents when Polish cavalry was trapped by German tanks, and attempted to fight free. However, this did not mean that the Polish army chose to attack tanks with horse cavalry. Later, on the Eastern Front, the Red Army did deploy cavalry units effectively against the Germans.
A more correct term would be "mounted infantry" instead of "cavalry", as horses were primarily used as a means of transportation, for which they were very suitable in view of the very poor road conditions in pre-war Poland. Another myth describes Polish cavalry as being armed with both sabres and lances; lances were used for peacetime ceremonial purposes only, and the primary weapon of the Polish cavalryman in 1939 was a rifle. Individual equipment did include a sabre, probably because of well-established tradition, and in the case of melee combat this secondary weapon would probably be more effective than a rifle and bayonet. Moreover, the Polish cavalry brigade order of battle in 1939 included, apart from the mounted soldiers themselves, light and heavy machine guns (wheeled), the model 35 anti-tank rifle, anti-aircraft weapons, anti-tank artillery such as the Bofors 37 mm, as well as light and scout tanks. The last mutual charge of cavalry against cavalry in Europe took place in Poland during the Battle of Krasnobród, when Polish and German cavalry units clashed with each other.
The last classical cavalry charge of the war took place on March 1, 1945, during the Battle of Schoenfeld, by the 1st "Warsaw" Independent Cavalry Brigade. Infantry and tanks had been employed to little effect against the German position; both foundered in the open wetlands, dominated by infantry and anti-tank fire from the German fortifications on the forward slope of Hill 157 overlooking the wetlands. The Germans had not taken cavalry into consideration when fortifying their position; this, combined with the brigade's swift assault, allowed the Polish cavalry to overrun the German anti-tank guns and press the attack into the village itself, now supported by infantry and tanks.
The Italian invasion of Greece in October 1940 saw mounted cavalry used effectively by the Greek defenders along the mountainous frontier with Albania. Three Greek cavalry regiments (two mounted and one partially mechanized) played an important role in the Italian defeat in this difficult terrain.
The contribution of Soviet cavalry to the development of modern military operational doctrine and its importance in defeating Nazi Germany has been eclipsed by the higher profile of tanks and airplanes. Soviet cavalry contributed significantly to the defeat of the Axis armies. They were able to provide the most mobile troops available in the early stages, when trucks and other equipment were low in quality, as well as providing cover for retreating forces. Considering their relatively limited numbers, the Soviet cavalry played a significant role in giving Germany its first real defeats in the early stages of the war. The continuing potential of mounted troops was demonstrated during the Battle of Moscow, against Guderian and the powerful central German 9th Army. Stavka gave Pavel Belov a mobile group including the elite 9th Tank Brigade, ski battalions and a Katyusha rocket launcher battalion, among others; the unit additionally received new weapons. This newly created group was among the first to carry the Soviet counter-offensive in late November, before the general offensive began on December 5. These mobile units often played major roles in both defensive and offensive operations.
Cavalry were amongst the first Soviet units to complete the encirclement in the Battle of Stalingrad, thus sealing the fate of the German 6th Army. Mounted Soviet forces also played a role in the encirclement of Berlin, with some Cossack cavalry units reaching the Reichstag in April 1945. Throughout the war they performed important tasks such as the capture of bridgeheads, considered one of the hardest jobs in battle, often doing so with inferior numbers. For instance, the 8th Guards Cavalry Regiment of the 2nd Guards Cavalry Division, 1st Guards Cavalry Corps, often fought outnumbered against elite German units.
By the final stages of the war only the Soviet Union was still fielding mounted units in substantial numbers, some in combined mechanized and horse units. The main advantage of this tactical approach was in enabling mounted infantry to keep pace with advancing tanks. Other factors favoring the retention of mounted forces included the high quality of the Russian Cossacks, who provided about half of all mounted Soviet cavalry throughout the war. They excelled at manoeuvre warfare, since the lack of roads limited the effectiveness of wheeled vehicles in many parts of the Eastern Front. Another consideration was that sufficient logistic capacity was often not available to support very large motorized forces, whereas cavalry was relatively easy to maintain when detached from the main army and acting on its own initiative. The main usage of the Soviet cavalry involved infiltration through front lines with subsequent deep raids, which disorganized German supply lines. Another role was the pursuit of retreating enemy forces during major front-line operations and breakthroughs.
During World War II, the Royal Hungarian Army's hussars were typically only used to undertake reconnaissance tasks against Soviet forces, and then only in detachments of section or squadron strength.
The last documented hussar attack was conducted by Lieutenant Colonel Kálmán Mikecz on August 16, 1941, at Nikolaev. The hussars, arriving as reinforcements, were employed to break through Russian positions ahead of German troops. Equipped with swords and submachine guns, they broke through the Russian lines in a single attack.
Erich Kern, a German officer, gave an eyewitness account of this last hussar attack in his 1948 memoir:
… We were again locked in a tough fight with a desperately defending enemy, dug in along a high railway embankment. We had already attacked four times, and four times we had been thrown back. The battalion commander swore, but the company commanders were helpless. Then, instead of the artillery support we had requested countless times, a Hungarian hussar regiment appeared on the scene. We laughed. What the hell did they want here with their graceful, elegant horses? We froze at once: these Hungarians had gone mad. Cavalry squadron followed cavalry squadron. A command rang out. The bronze-brown, slender riders seemed almost to grow from their saddles. Their colonel, gleaming with gold collar patches, drew his sword. Four or five armoured cars darted out from the flanks, and the regiment swept across the wide plain with flashing swords in the afternoon sun. Seydlitz had attacked like this once before. Forgetting all caution, we climbed out of our cover. It was all like a great equestrian film. The first shots rumbled, then became less frequent. With astonished eyes, in disbelief, we watched as the Soviet regiment, which until then had repulsed our attacks with desperate determination, turned around and abandoned its positions in panic. And the triumphant Hungarians chased the Russians before them and cut them down with their glittering sabres. The hussar sword, it seems, was a bit much for Russian nerves. For once, the ancient weapon had triumphed over modern equipment …
The last mounted sabre charge by Italian cavalry occurred on August 24, 1942, at Isbuscenski (Russia), when a squadron of the Savoia Cavalry Regiment charged the 812th Siberian Infantry Regiment. The remainder of the regiment, together with the Novara Lancers, made a dismounted attack in an action that ended with the retreat of the Russians after heavy losses on both sides. The final Italian cavalry action occurred on October 17, 1942, at Poloj (now in Croatia), when a squadron of the Alexandria Cavalry Regiment engaged a large group of Yugoslav partisans.
Romanian, Hungarian and Italian cavalry were dispersed or disbanded following the retreat of the Axis forces from Russia. Germany still maintained some mounted SS and Cossack units, mixed with bicycle troops, until the last days of the war.
Finland used mounted troops against Russian forces effectively in forested terrain during the Continuation War. The last Finnish cavalry unit was not disbanded until 1947.
The U.S. Army's last horse cavalry actions were fought during World War II: (a) by the 26th Cavalry Regiment, a small mounted regiment of Philippine Scouts that fought the Japanese during the retreat down the Bataan peninsula until it was effectively destroyed by January 1942; and (b) on captured German horses, by the mounted reconnaissance section of the U.S. 10th Mountain Division in a spearhead pursuit of the German Army across the Po Valley in Italy in April 1945. The last horsed U.S. cavalry formation, the Second Cavalry Division, was dismounted in March 1944.
All British Army cavalry regiments had been mechanised since 1 March 1942, when the Queen's Own Yorkshire Dragoons (Yeomanry) was converted to a motorised role following mounted service against the Vichy French in Syria the previous year. The final cavalry charge by British Empire forces occurred on 21 March 1942, when a 60-strong patrol of the Burma Frontier Force encountered Japanese infantry near Toungoo airfield in central Myanmar. The Sikh sowars of the Frontier Force cavalry, led by Captain Arthur Sandeman of The Central India Horse (21st King George V's Own Horse), charged in the old style with sabres, and most were killed.
In the early stages of World War II, mounted units of the Mongolian People's Army were involved in the Battle of Khalkhin Gol against invading Japanese forces. Soviet forces under the command of Georgy Zhukov, together with Mongolian forces, defeated the Japanese Sixth Army and effectively ended the Soviet–Japanese Border Wars. After the Soviet–Japanese Neutrality Pact of 1941, Mongolia remained neutral throughout most of the war, but its geographical situation meant that the country served as a buffer between Japanese forces and the Soviet Union. In addition to keeping around 10% of the population under arms, Mongolia provided half a million trained horses for use by the Soviet Army. In 1945 a partially mounted Soviet–Mongolian Cavalry Mechanized Group played a supporting role on the western flank of the Soviet invasion of Manchuria. The last active service seen by cavalry units of the Mongolian Army occurred in 1946–1948, during border clashes between Mongolia and the Republic of China.
While most modern "cavalry" units have some historic connection with formerly mounted troops, this is not always the case. The modern Irish Defence Forces (DF) includes a "Cavalry Corps" equipped with armoured cars and Scorpion tracked combat reconnaissance vehicles. The DF has never included horse cavalry since its establishment in 1922 (other than a small mounted escort of Blue Hussars, drawn from the Artillery Corps when required for ceremonial occasions). However, the mystique of the cavalry is such that the name has been adopted for what has always been a mechanised force.
Some engagements in late 20th- and early 21st-century guerrilla wars involved mounted troops, particularly against partisan or guerrilla fighters in areas with poor transport infrastructure. Such units were used not as cavalry but as mounted infantry. Examples occurred in Afghanistan, Portuguese Africa and Rhodesia. The French Army used existing mounted squadrons of Spahis to a limited extent for patrol work during the Algerian War (1954–1962). The Swiss Army maintained a mounted dragoon regiment for combat purposes until 1973. The Portuguese Army used horse-mounted cavalry with some success in the wars of independence in Angola and Mozambique in the 1960s and 1970s. During the Rhodesian Bush War of 1964–1979 the Rhodesian Army created an elite mounted infantry unit, Grey's Scouts, to fight unconventional actions against the rebel forces of Robert Mugabe and Joshua Nkomo. The horse-mounted infantry of the Scouts were effective and reportedly feared by their opponents in the rebel African forces. Since 1978, during the successive phases of the Afghan civil war, there have been several instances of horse-mounted combat.
Central and South American armies maintained mounted cavalry for longer than those of Asia, Europe, or North America. The Mexican Army included a number of horse mounted cavalry regiments as late as the mid-1990s and the Chilean Army had five such regiments in 1983 as mounted mountain troops.
The Soviet Army retained horse cavalry divisions until 1955.
Today the Indian Army's 61st Cavalry is reported to be the largest existing horse-mounted cavalry unit still having operational potential. It was raised in 1951 from the amalgamated state cavalry squadrons of Gwalior, Jodhpur, and Mysore. While primarily utilised for ceremonial purposes, the regiment can be deployed for internal security or police roles if required. The 61st Cavalry and the President's Body Guard parade in full dress uniform in New Delhi each year in what is probably the largest assembly of traditional cavalry still to be seen in the world. Both the Indian and the Pakistani armies maintain armoured regiments with the titles of Lancers or Horse, dating back to the 19th century.
As of 2007, the Chinese People's Liberation Army employed two battalions of horse-mounted border guards in Xinjiang for border patrol purposes. PLA mounted units last saw action during border clashes with Vietnam in the 1970s and 1980s, after which most cavalry units were disbanded as part of major military downsizing in the 1980s. In the wake of the 2008 Sichuan earthquake, there were calls to rebuild the army horse inventory for disaster relief in difficult terrain. Subsequent Chinese media reports confirm that the PLA maintains operational horse cavalry at squadron strength in Xinjiang and Inner Mongolia for scouting, logistical, and border security purposes.
The Chilean Army still maintains a mixed armoured cavalry regiment, based in the city of Angol as part of the III Mountain Division, with elements acting as mounted mountain exploration troops; it also maintains an independent exploration cavalry detachment in the town of Chaitén. The rugged mountain terrain calls for the use of special horses suited to that role.
The Argentine Army has two mounted cavalry units: the Regiment of Horse Grenadiers, which performs mostly ceremonial duties but is also responsible for the president's security (in this case acting as infantry), and the 4th Mountain Cavalry Regiment (comprising both horse and light armoured squadrons), stationed in San Martín de los Andes, where it has an exploration role as part of the 6th Mountain Brigade. Most armoured cavalry units of the Army are considered successors of the old cavalry regiments from the Independence Wars and keep their traditional names, such as Hussars, Cuirassiers and Lancers, as well as their uniforms. Equestrian training remains an important part of their tradition, especially among officers.
Cavalry or mounted gendarmerie units continue to be maintained for purely or primarily ceremonial purposes by the Algerian, Argentine, Bolivian, Brazilian, British, Bulgarian, Canadian, Chilean, Colombian, Danish, Dutch, Finnish, French, Hungarian, Indian, Italian, Jordanian, Malaysian, Moroccan, Nepalese, Nigerian, North Korean, Omani, Pakistani, Panamanian, Paraguayan, Peruvian, Polish, Portuguese, Russian, Senegalese, Spanish, Swedish, Thai, Tunisian, Turkmenistan, United States, Uruguayan and Venezuelan armed forces.
A number of armoured regiments in the British Army retain the historic designations of Hussars, Dragoons, Light Dragoons, Dragoon Guards, Lancers and Yeomanry. Only the Household Cavalry (consisting of the Life Guards' mounted squadron, The Blues and Royals' mounted squadron, the State Trumpeters of The Household Cavalry and the Household Cavalry Mounted Band) are maintained for mounted (and dismounted) ceremonial duties in London.
The French Army still has regiments with the historic designations of Cuirassiers, Hussars, Chasseurs, Dragoons and Spahis. Only the cavalry of the Republican Guard and a ceremonial fanfare detachment of trumpeters for the cavalry/armoured branch as a whole are now mounted.
In the Canadian Army, a number of regular and reserve units have cavalry roots, including The Royal Canadian Hussars (Montreal), the Governor General's Horse Guards, Lord Strathcona's Horse, The British Columbia Dragoons, The Royal Canadian Dragoons, and the South Alberta Light Horse. Of these, only Lord Strathcona's Horse and the Governor General's Horse Guards maintain an official ceremonial horse-mounted cavalry troop or squadron.
The modern Pakistan army maintains about 40 armoured regiments with the historic titles of Lancers, Cavalry or Horse. Six of these date back to the 19th century, although only the President's Body Guard remains horse-mounted.
In 2002, the Army of the Russian Federation reintroduced a ceremonial mounted squadron wearing historic uniforms.
Both the Australian and New Zealand armies follow the British practice of maintaining traditional titles (Light Horse or Mounted Rifles) for modern mechanised units. However, neither country retains a horse-mounted unit.
Several armored units of the modern United States Army retain the designation of "armored cavalry". The United States also has "air cavalry" units equipped with helicopters. The Horse Cavalry Detachment of the U.S. Army's 1st Cavalry Division, made up of active duty soldiers, still functions as an active unit, trained to approximate the weapons, tools, equipment and techniques used by the United States Cavalry in the 1880s.
The First Troop Philadelphia City Cavalry is a volunteer unit within the Pennsylvania Army National Guard which serves as a combat force when in federal service but acts in a mounted disaster relief role when in state service. In addition, the Parsons' Mounted Cavalry is a Reserve Officer Training Corps unit which forms part of the Corps of Cadets at Texas A&M University. Valley Forge Military Academy and College also has a mounted company, known as D-Troop.
Some individual U.S. states maintain cavalry units as a part of their respective state defense forces. The Maryland Defense Force includes a cavalry unit, Cavalry Troop A, which serves primarily as a ceremonial unit. The unit training includes a saber qualification course based upon the 1926 U.S. Army course. Cavalry Troop A also assists other Maryland agencies as a rural search and rescue asset. In Massachusetts, The National Lancers trace their lineage to a volunteer cavalry militia unit established in 1836 and are currently organized as an official part of the Massachusetts Organized Militia. The National Lancers maintain three units, Troops A, B, and C, which serve in a ceremonial role and assist in search and rescue missions. In July 2004, the National Lancers were ordered into active state service to guard Camp Curtis Guild during the 2004 Democratic National Convention. The Governor's Horse Guard of Connecticut maintains two companies which are trained in urban crowd control. In 2020, the California State Guard stood up the 26th Mounted Operations Detachment, a search-and-rescue cavalry unit.
From the beginning of civilization to the 20th century, ownership of heavy cavalry horses was a mark of wealth amongst settled peoples. A cavalry horse involved considerable expense in breeding, training, feeding, and equipment, and had very little productive use except as a mode of transport.
For this reason, and because of their often decisive military role, the cavalry has typically been associated with high social status. This was most clearly seen in the feudal system, where a lord was expected to enter combat armored and on horseback and bring with him an entourage of lightly armed peasants on foot. If landlords and peasant levies came into conflict, the poorly trained footmen would be ill-equipped to defeat armored knights.
In later national armies, service as an officer in the cavalry was generally a badge of high social status. For instance, prior to 1914 most officers of British cavalry regiments came from a socially privileged background, and the considerable expenses associated with their role generally required private means, even after it became possible for officers of the line infantry regiments to live on their pay. Options open to poorer cavalry officers in the various European armies included service with less fashionable (though often highly professional) frontier or colonial units. These included the British Indian cavalry, the Russian Cossacks, and the French Chasseurs d'Afrique.
During the 19th and early 20th centuries most monarchies maintained a mounted cavalry element in their royal or imperial guards. These ranged from small units providing ceremonial escorts and palace guards, through to large formations intended for active service. The mounted escort of the Spanish Royal Household provided an example of the former and the twelve cavalry regiments of the Prussian Imperial Guard an example of the latter. In either case the officers of such units were likely to be drawn from the aristocracies of their respective societies.
Some sense of the noise and power of a cavalry charge can be gained from the 1970 film Waterloo, which featured some 2,000 cavalrymen, some of them Cossacks. It included detailed displays of the horsemanship required to manage animals and weapons in large numbers at the gallop (unlike the real Battle of Waterloo, where deep mud significantly slowed the horses). The Gary Cooper film They Came to Cordura contains a scene of a cavalry regiment deploying from march to battle line formation. A smaller-scale cavalry charge can be seen in The Lord of the Rings: The Return of the King (2003); although the finished scene has substantial computer-generated imagery, raw footage and the riders' reactions are shown in the Extended Version DVD appendices.
A number of other films also show cavalry actions.
"title": "18th-century Europe and Napoleonic Wars"
},
{
"paragraph_id": 74,
"text": "Massed infantry, especially those formed in squares were deadly to cavalry, but offered an excellent target for artillery. Once a bombardment had disordered the infantry formation, cavalry were able to rout and pursue the scattered foot soldiers. It was not until individual firearms gained accuracy and improved rates of fire that cavalry was diminished in this role as well. Even then light cavalry remained an indispensable tool for scouting, screening the army's movements, and harassing the enemy's supply lines until military aircraft supplanted them in this role in the early stages of World War I.",
"title": "18th-century Europe and Napoleonic Wars"
},
{
"paragraph_id": 75,
"text": "By the beginning of the 19th century, European cavalry fell into four main categories:",
"title": "19th century"
},
{
"paragraph_id": 76,
"text": "There were cavalry variations for individual nations as well: France had the chasseurs à cheval; Prussia had the Jäger zu Pferde; Bavaria, Saxony and Austria had the Chevaulegers; and Russia had Cossacks. Britain, from the mid-18th century, had Light Dragoons as light cavalry and Dragoons, Dragoon Guards and Household Cavalry as heavy cavalry. Only after the end of the Napoleonic wars were the Household Cavalry equipped with cuirasses, and some other regiments were converted to lancers. In the United States Army prior to 1862 the cavalry were almost always dragoons. The Imperial Japanese Army had its cavalry uniformed as hussars, but they fought as dragoons.",
"title": "19th century"
},
{
"paragraph_id": 77,
"text": "In the Crimean War, the Charge of the Light Brigade and the Thin Red Line at the Battle of Balaclava showed the vulnerability of cavalry, when deployed without effective support.",
"title": "19th century"
},
{
"paragraph_id": 78,
"text": "During the Franco-Prussian War, at the Battle of Mars-la-Tour in 1870, a Prussian cavalry brigade decisively smashed the centre of the French battle line, after skilfully concealing their approach. This event became known as Von Bredow's Death Ride after the brigade commander Adalbert von Bredow; it would be used in the following decades to argue that massed cavalry charges still had a place on the modern battlefield.",
"title": "19th century"
},
{
"paragraph_id": 79,
"text": "Cavalry found a new role in colonial campaigns (irregular warfare), where modern weapons were lacking and the slow moving infantry-artillery train or fixed fortifications were often ineffective against indigenous insurgents (unless the latter offered a fight on an equal footing, as at Tel-el-Kebir, Omdurman, etc.). Cavalry \"flying columns\" proved effective, or at least cost-effective, in many campaigns—although an astute native commander (like Samori in western Africa, Shamil in the Caucasus, or any of the better Boer commanders) could turn the tables and use the greater mobility of their cavalry to offset their relative lack of firepower compared with European forces.",
"title": "19th century"
},
{
"paragraph_id": 80,
"text": "In 1903 the British Indian Army maintained forty regiments of cavalry, numbering about 25,000 Indian sowars (cavalrymen), with British and Indian officers.",
"title": "19th century"
},
{
"paragraph_id": 81,
"text": "Among the more famous regiments in the lineages of the modern Indian and Pakistani armies are:",
"title": "19th century"
},
{
"paragraph_id": 82,
"text": "Several of these formations are still active, though they now are armoured formations, for example the Guides Cavalry of Pakistan.",
"title": "19th century"
},
{
"paragraph_id": 83,
"text": "The French Army maintained substantial cavalry forces in Algeria and Morocco from 1830 until the end of the Second World War. Much of the Mediterranean coastal terrain was suitable for mounted action and there was a long established culture of horsemanship amongst the Arab and Berber inhabitants. The French forces included Spahis, Chasseurs d' Afrique, Foreign Legion cavalry and mounted Goumiers. Both Spain and Italy raised cavalry regiments from amongst the indigenous horsemen of their North African territories (see regulares, Italian Spahis and savari respectively).",
"title": "19th century"
},
{
"paragraph_id": 84,
"text": "Imperial Germany employed mounted formations in South West Africa as part of the Schutztruppen (colonial army) garrisoning the territory.",
"title": "19th century"
},
{
"paragraph_id": 85,
"text": "In the early American Civil War the regular United States Army mounted rifle, dragoon, and two existing cavalry regiments were reorganized and renamed cavalry regiments, of which there were six. Over a hundred other federal and state cavalry regiments were organized, but the infantry played a much larger role in many battles due to its larger numbers, lower cost per rifle fielded, and much easier recruitment. However, cavalry saw a role as part of screening forces and in foraging and scouting. The later phases of the war saw the Federal army developing a truly effective cavalry force fighting as scouts, raiders, and, with repeating rifles, as mounted infantry. The distinguished 1st Virginia Cavalry ranks as one of the most effectual and successful cavalry units on the Confederate side. Noted cavalry commanders included Confederate general J.E.B. Stuart, Nathan Bedford Forrest, and John Singleton Mosby (a.k.a. \"The Grey Ghost\") and on the Union side, Philip Sheridan and George Armstrong Custer. Post Civil War, as the volunteer armies disbanded, the regular army cavalry regiments increased in number from six to ten, among them Custer's U.S. 7th Cavalry Regiment of Little Bighorn fame, and the African-American U.S. 9th Cavalry Regiment and U.S. 10th Cavalry Regiment. The black units, along with others (both cavalry and infantry), collectively became known as the Buffalo Soldiers. According to Robert M. Utley:",
"title": "19th century"
},
{
"paragraph_id": 86,
"text": "These regiments, which rarely took the field as complete organizations, served throughout the American Indian Wars through the close of the frontier in the 1890s. Volunteer cavalry regiments like the Rough Riders consisted of horsemen such as cowboys, ranchers and other outdoorsmen, that served as a cavalry in the United States Military.",
"title": "19th century"
},
{
"paragraph_id": 87,
"text": "At the beginning of the 20th century, all armies still maintained substantial cavalry forces, although there was contention over whether their role should revert to that of mounted infantry (the historic dragoon function). With motorised vehicles and aircraft still under development, horse mounted troops remained the only fully mobile forces available for manoeuvre warfare until 1914.",
"title": "Developments 1900–1914"
},
{
"paragraph_id": 88,
"text": "Following the experience of the South African War of 1899–1902 (where mounted Boer citizen commandos fighting on foot from cover proved more effective than regular cavalry), the British Army withdrew lances for all but ceremonial purposes and placed a new emphasis on training for dismounted action in 1903. Lances were however readopted for active service in 1912.",
"title": "Developments 1900–1914"
},
{
"paragraph_id": 89,
"text": "In 1882, the Imperial Russian Army converted all its line hussar and lancer regiments to dragoons, with an emphasis on mounted infantry training. In 1910 these regiments reverted to their historic roles, designations and uniforms.",
"title": "Developments 1900–1914"
},
{
"paragraph_id": 90,
"text": "By 1909, official regulations dictating the role of the Imperial German cavalry had been revised to indicate an increasing realization of the realities of modern warfare. The massive cavalry charge in three waves which had previously marked the end of annual maneuvers was discontinued and a new emphasis was placed in training on scouting, raiding and pursuit; rather than main battle involvement. The perceived importance of cavalry was however still evident, with thirteen new regiments of mounted rifles (Jäger zu Pferde) being raised shortly before the outbreak of war in 1914.",
"title": "Developments 1900–1914"
},
{
"paragraph_id": 91,
"text": "In spite of significant experience in mounted warfare in Morocco during 1908–14, the French cavalry remained a highly conservative institution. The traditional tactical distinctions between heavy, medium, and light cavalry branches were retained. French cuirassiers wore breastplates and plumed helmets unchanged from the Napoleonic period, during the early months of World War I. Dragoons were similarly equipped, though they did not wear cuirasses and did carry lances. Light cavalry were described as being \"a blaze of colour\". French cavalry of all branches were well mounted and were trained to change position and charge at full gallop. One weakness in training was that French cavalrymen seldom dismounted on the march and their horses suffered heavily from raw backs in August 1914.",
"title": "Developments 1900–1914"
},
{
"paragraph_id": 92,
"text": "In August 1914, all combatant armies still retained substantial numbers of cavalry and the mobile nature of the opening battles on both Eastern and Western Fronts provided a number of instances of traditional cavalry actions, though on a smaller and more scattered scale than those of previous wars. The 110 regiments of Imperial German cavalry, while as colourful and traditional as any in peacetime appearance, had adopted a practice of falling back on infantry support when any substantial opposition was encountered. These cautious tactics aroused derision amongst their more conservative French and Russian opponents but proved appropriate to the new nature of warfare. A single attempt by the German army, on 12 August 1914, to use six regiments of massed cavalry to cut off the Belgian field army from Antwerp floundered when they were driven back in disorder by rifle fire. The two German cavalry brigades involved lost 492 men and 843 horses in repeated charges against dismounted Belgian lancers and infantry. One of the last recorded charges by French cavalry took place on the night of 9/10 September 1914 when a squadron of the 16th Dragoons overran a German airfield at Soissons, while suffering heavy losses. Once the front lines stabilised on the Western Front with the start of Trench Warfare, a combination of barbed wire, uneven muddy terrain, machine guns and rapid fire rifles proved deadly to horse mounted troops and by early 1915 most cavalry units were no longer seeing front line action.",
"title": "First World War"
},
{
"paragraph_id": 93,
"text": "On the Eastern Front, a more fluid form of warfare arose from flat open terrain favorable to mounted warfare. On the outbreak of war in 1914 the bulk of the Russian cavalry was deployed at full strength in frontier garrisons and, during the period that the main armies were mobilizing, scouting and raiding into East Prussia and Austrian Galicia was undertaken by mounted troops trained to fight with sabre and lance in the traditional style. On 21 August 1914 the 4th Austro-Hungarian Kavalleriedivison fought a major mounted engagement at Jaroslavic with the Russian 10th Cavalry Division, in what was arguably the final historic battle to involve thousands of horsemen on both sides. While this was the last massed cavalry encounter on the Eastern Front, the absence of good roads limited the use of mechanized transport and even the technologically advanced Imperial German Army continued to deploy up to twenty-four horse-mounted divisions in the East, as late as 1917.",
"title": "First World War"
},
{
"paragraph_id": 94,
"text": "For the remainder of the War on the Western Front, cavalry had virtually no role to play. The British and French armies dismounted many of their cavalry regiments and used them in infantry and other roles: the Life Guards for example spent the last months of the War as a machine gun corps; and the Australian Light Horse served as light infantry during the Gallipoli campaign. In September 1914 cavalry comprised 9.28% of the total manpower of the British Expeditionary Force in France—by July 1918 this proportion had fallen to 1.65%. As early as the first winter of the war most French cavalry regiments had dismounted a squadron each, for service in the trenches. The French cavalry numbered 102,000 in May 1915 but had been reduced to 63,000 by October 1918. The German Army dismounted nearly all their cavalry in the West, maintaining only one mounted division on that front by January 1917.",
"title": "First World War"
},
{
"paragraph_id": 95,
"text": "Italy entered the war in 1915 with thirty regiments of line cavalry, lancers and light horse. While employed effectively against their Austro-Hungarian counterparts during the initial offensives across the Isonzo River, the Italian mounted forces ceased to have a significant role as the front shifted into mountainous terrain. By 1916 most cavalry machine-gun sections and two complete cavalry divisions had been dismounted and seconded to the infantry.",
"title": "First World War"
},
{
"paragraph_id": 96,
"text": "Some cavalry were retained as mounted troops in reserve behind the lines, in anticipation of a penetration of the opposing trenches that it seemed would never come. Tanks, introduced on the Western Front by the British in September 1916 during the Battle of the Somme, had the capacity to achieve such breakthroughs but did not have the reliable range to exploit them. In their first major use at the Battle of Cambrai (1917), the plan was for a cavalry division to follow behind the tanks, however they were not able to cross a canal because a tank had broken the only bridge. On a few other occasions, throughout the war, cavalry were readied in significant numbers for involvement in major offensives; such as in the Battle of Caporetto and the Battle of Moreuil Wood. However it was not until the German Army had been forced to retreat in the Hundred Days Offensive of 1918, that limited numbers of cavalry were again able to operate with any effectiveness in their intended role. There was a successful charge by the British 7th Dragoon Guards on the last day of the war.",
"title": "First World War"
},
{
"paragraph_id": 97,
"text": "In the wider spaces of the Eastern Front, a more fluid form of warfare continued and there was still a use for mounted troops. Some wide-ranging actions were fought, again mostly in the early months of the war. However, even here the value of cavalry was overrated and the maintenance of large mounted formations at the front by the Russian Army put a major strain on the railway system, to little strategic advantage. In February 1917, the Russian regular cavalry (exclusive of Cossacks) was reduced by nearly a third from its peak number of 200,000, as two squadrons of each regiment were dismounted and incorporated into additional infantry battalions. Their Austro-Hungarian opponents, plagued by a shortage of trained infantry, had been obliged to progressively convert most horse cavalry regiments to dismounted rifle units starting in late 1914.",
"title": "First World War"
},
{
"paragraph_id": 98,
"text": "In the Middle East, during the Sinai and Palestine Campaign mounted forces (British, Indian, Ottoman, Australian, Arab and New Zealand) retained an important strategic role both as mounted infantry and cavalry.",
"title": "First World War"
},
{
"paragraph_id": 99,
"text": "In Egypt, the mounted infantry formations like the New Zealand Mounted Rifles Brigade and Australian Light Horse of ANZAC Mounted Division, operating as mounted infantry, drove German and Ottoman forces back from Romani to Magdhaba and Rafa and out of the Egyptian Sinai Peninsula in 1916.",
"title": "First World War"
},
{
"paragraph_id": 100,
"text": "After a stalemate on the Gaza–Beersheba line between March and October 1917, Beersheba was captured by the Australian Mounted Division's 4th Light Horse Brigade. Their mounted charge succeeded after a coordinated attack by the British Infantry and Yeomanry cavalry and the Australian and New Zealand Light Horse and Mounted Rifles brigades. A series of coordinated attacks by these Egyptian Expeditionary Force infantry and mounted troops were also successful at the Battle of Mughar Ridge, during which the British infantry divisions and the Desert Mounted Corps drove two Ottoman armies back to the Jaffa—Jerusalem line. The infantry with mainly dismounted cavalry and mounted infantry fought in the Judean Hills to eventually almost encircle Jerusalem which was occupied shortly after.",
"title": "First World War"
},
{
"paragraph_id": 101,
"text": "During a pause in operations necessitated by the German spring offensive in 1918 on the Western Front, joint infantry and mounted infantry attacks towards Amman and Es Salt resulted in retreats back to the Jordan Valley which continued to be occupied by mounted divisions during the summer of 1918.",
"title": "First World War"
},
{
"paragraph_id": 102,
"text": "The Australian Mounted Division was armed with swords and in September, after the successful breaching of the Ottoman line on the Mediterranean coast by the British Empire infantry XXI Corps was followed by cavalry attacks by the 4th Cavalry Division, 5th Cavalry Division and Australian Mounted Divisions which almost encircled two Ottoman armies in the Judean Hills forcing their retreat. Meanwhile, Chaytor's Force of infantry and mounted infantry in ANZAC Mounted Division held the Jordan Valley, covering the right flank to later advance eastwards to capture Es Salt and Amman and half of a third Ottoman army. A subsequent pursuit by the 4th Cavalry Division and the Australian Mounted Division followed by the 5th Cavalry Division to Damascus. Armoured cars and 5th Cavalry Division lancers were continuing the pursuit of Ottoman units north of Aleppo when the Armistice of Mudros was signed by the Ottoman Empire.",
"title": "First World War"
},
{
"paragraph_id": 103,
"text": "A combination of military conservatism in almost all armies and post-war financial constraints prevented the lessons of 1914–1918 being acted on immediately. There was a general reduction in the number of cavalry regiments in the British, French, Italian and other Western armies but it was still argued with conviction (for example in the 1922 edition of the Encyclopædia Britannica) that mounted troops had a major role to play in future warfare. The 1920s saw an interim period during which cavalry remained as a proud and conspicuous element of all major armies, though much less so than prior to 1914.",
"title": "Post–World War I"
},
{
"paragraph_id": 104,
"text": "Cavalry was extensively used in the Russian Civil War and the Soviet-Polish War. The last major cavalry battle was the Battle of Komarów in 1920, between Poland and the Russian Bolsheviks. Colonial warfare in Morocco, Syria, the Middle East and the North West Frontier of India provided some opportunities for mounted action against enemies lacking advanced weaponry.",
"title": "Post–World War I"
},
{
"paragraph_id": 105,
"text": "The post-war German Army (Reichsheer) was permitted a large proportion of cavalry (18 regiments or 16.4% of total manpower) under the conditions of the Treaty of Versailles.",
"title": "Post–World War I"
},
{
"paragraph_id": 106,
"text": "The British Army mechanised all cavalry regiments between 1929 and 1941, redefining their role from horse to armoured vehicles to form the Royal Armoured Corps together with the Royal Tank Regiment. The U.S. Cavalry abandoned its sabres in 1934 and commenced the conversion of its horsed regiments to mechanized cavalry, starting with the First Regiment of Cavalry in January 1933.",
"title": "Post–World War I"
},
{
"paragraph_id": 107,
"text": "During the Turkish War of Independence, Turkish cavalry under General Fahrettin Altay was instrumental in the Kemalist victory over the invading Greek Army in 1922 during the Battle of Dumlupınar. The 5th Cavalry Division was able to slip behind the main Greek army, cutting off all communication and supply lines as well as retreat options. This forced the surrender of the remaining Greek forces and may have been the last time in history that cavalry played a definitive role in the outcome of a battle.",
"title": "Post–World War I"
},
{
"paragraph_id": 108,
"text": "During the 1930s, the French Army experimented with integrating mounted and mechanised cavalry units into larger formations. Dragoon regiments were converted to motorised infantry (trucks and motor cycles), and cuirassiers to armoured units; while light cavalry (chasseurs a' cheval, hussars and spahis) remained as mounted sabre squadrons. The theory was that mixed forces comprising these diverse units could utilise the strengths of each according to circumstances. In practice mounted troops proved unable to keep up with fast moving mechanised units over any distance.",
"title": "Post–World War I"
},
{
"paragraph_id": 109,
"text": "The 39 cavalry regiments of the British Indian Army were reduced to 21 as the result of a series of amalgamations immediately following World War I. The new establishment remained unchanged until 1936 when three regiments were redesignated as permanent training units, each with six, still mounted, regiments linked to them. In 1938, the process of mechanization began with the conversion of a full cavalry brigade (two Indian regiments and one British) to armoured car and tank units. By the end of 1940, all of the Indian cavalry had been mechanized, initially and in the majority of cases, to motorized infantry transported in 15cwt trucks. The last horsed regiment of the British Indian Army (other than the Viceroy's Bodyguard and some Indian States Forces regiments) was the 19th King George's Own Lancers which had its final mounted parade at Rawalpindi on 28 October 1939. This unit still exists in the Pakistan Army as an armored regiment.",
"title": "Post–World War I"
},
{
"paragraph_id": 110,
"text": "While most armies still maintained cavalry units at the outbreak of World War II in 1939, significant mounted action was largely restricted to the Polish, Balkan, and Soviet campaigns. Rather than charge their mounts into battle, cavalry units were either used as mounted infantry (using horses to move into position and then dismounting for combat) or as reconnaissance units (especially in areas not suited to tracked or wheeled vehicles).",
"title": "World War II"
},
{
"paragraph_id": 111,
"text": "A popular myth is that Polish cavalry armed with lances charged German tanks during the September 1939 campaign. This arose from misreporting of a single clash on 1 September near Krojanty, when two squadrons of the Polish 18th Lancers armed with sabres scattered German infantry before being caught in the open by German armoured cars. Two examples illustrate how the myth developed. First, because motorised vehicles were in short supply, the Poles used horses to pull anti-tank weapons into position. Second, there were a few incidents when Polish cavalry was trapped by German tanks, and attempted to fight free. However, this did not mean that the Polish army chose to attack tanks with horse cavalry. Later, on the Eastern Front, the Red Army did deploy cavalry units effectively against the Germans.",
"title": "World War II"
},
{
"paragraph_id": 112,
"text": "A more correct term would be \"mounted infantry\" instead of \"cavalry\", as horses were primarily used as a means of transportation, for which they were very suitable in view of the very poor road conditions in pre-war Poland. Another myth describes Polish cavalry as being armed with both sabres and lances; lances were used for peacetime ceremonial purposes only and the primary weapon of the Polish cavalryman in 1939 was a rifle. Individual equipment did include a sabre, probably because of well-established tradition, and in the case of a melee combat this secondary weapon would probably be more effective than a rifle and bayonet. Moreover, the Polish cavalry brigade order of battle in 1939 included, apart from the mounted soldiers themselves, light and heavy machine guns (wheeled), the Anti-tank rifle, model 35, anti-aircraft weapons, anti tank artillery such as the Bofors 37 mm, also light and scout tanks, etc. The last cavalry vs. cavalry mutual charge in Europe took place in Poland during the Battle of Krasnobród, when Polish and German cavalry units clashed with each other.",
"title": "World War II"
},
{
"paragraph_id": 113,
"text": "The last classical cavalry charge of the war took place on March 1, 1945, during the Battle of Schoenfeld by the 1st \"Warsaw\" Independent Cavalry Brigade. Infantry and tanks had been employed to little effect against the German position, both of which floundered in the open wetlands only to be dominated by infantry and antitank fire from the German fortifications on the forward slope of Hill 157, overlooking the wetlands. The Germans had not taken cavalry into consideration when fortifying their position which, combined with the \"Warsaw\"s swift assault, overran the German anti-tank guns and consolidated into an attack into the village itself, now supported by infantry and tanks.",
"title": "World War II"
},
{
"paragraph_id": 114,
"text": "The Italian invasion of Greece in October 1940 saw mounted cavalry used effectively by the Greek defenders along the mountainous frontier with Albania. Three Greek cavalry regiments (two mounted and one partially mechanized) played an important role in the Italian defeat in this difficult terrain.",
"title": "World War II"
},
{
"paragraph_id": 115,
"text": "The contribution of Soviet cavalry to the development of modern military operational doctrine and its importance in defeating Nazi Germany has been eclipsed by the higher profile of tanks and airplanes. Soviet cavalry contributed significantly to the defeat of the Axis armies. They were able to provide the most mobile troops available in the early stages, when trucks and other equipment were low in quality; as well as providing cover for retreating forces. Considering their relatively limited numbers, the Soviet cavalry played a significant role in giving Germany its first real defeats in the early stages of the war. The continuing potential of mounted troops was demonstrated during the Battle of Moscow, against Guderian and the powerful central German 9th Army. Pavel Belov was given by Stavka a mobile group including the elite 9th tank brigade, ski battalions, Katyusha rocket launcher battalion among others, the unit additionally received new weapons. This newly created group became the first to carry the Soviet counter-offensive in late November, when the general offensive began on December 5. These mobile units often played major roles in both defensive and offensive operations.",
"title": "World War II"
},
{
"paragraph_id": 116,
"text": "Cavalry were amongst the first Soviet units to complete the encirclement in the Battle of Stalingrad, thus sealing the fate of the German 6th Army. Mounted Soviet forces also played a role in the encirclement of Berlin, with some Cossack cavalry units reaching the Reichstag in April 1945. Throughout the war they performed important tasks such as the capture of bridgeheads which is considered one of the hardest jobs in battle, often doing so with inferior numbers. For instance the 8th Guards Cavalry Regiment of the 2nd Guards Cavalry Division (Soviet Union), 1st Guards Cavalry Corps often fought outnumbered against elite German units.",
"title": "World War II"
},
{
"paragraph_id": 117,
"text": "By the final stages of the war only the Soviet Union was still fielding mounted units in substantial numbers, some in combined mechanized and horse units. The main advantage of this tactical approach was in enabling mounted infantry to keep pace with advancing tanks. Other factors favoring the retention of mounted forces included the high quality of Russian Cossacks, which provided about half of all mounted Soviet cavalry throughout the war. They excelled in warfare manoeuvers, since the lack of roads limited the effectiveness of wheeled vehicles in many parts of the Eastern Front. Another consideration was that sufficient logistic capacity was often not available to support very large motorized forces, whereas cavalry was relatively easy to maintain when detached from the main army and acting on its own initiative. The main usage of the Soviet cavalry involved infiltration through front lines with subsequent deep raids, which disorganized German supply lines. Another role was the pursuit of retreating enemy forces during major front-line operations and breakthroughs.",
"title": "World War II"
},
{
"paragraph_id": 118,
"text": "During World War II, the Royal Hungarian Army's hussars were typically only used to undertake reconnaissance tasks against Soviet forces, and then only in detachments of section or squadron strength.",
"title": "World War II"
},
{
"paragraph_id": 119,
"text": "The last documented hussar attack was conducted by Lieutenant Colonel Kálmán Mikecz on August 16, 1941, at Nikolaev. The hussars arriving as reinforcements, were employed to break through Russian positions ahead of German troops. The hussars equipped with swords and submachine guns broke through the Russian lines in a single attack.",
"title": "World War II"
},
{
"paragraph_id": 120,
"text": "An eyewitness account of the last hussar attack by Erich Kern, a German officer, was written in his memoir in 1948:",
"title": "World War II"
},
{
"paragraph_id": 121,
"text": "… We were again in a tough fight with the desperately defensive enemy who dug himself along a high railway embankment. We've been attacked four times already, and we've been kicked back all four times. The battalion commander swore, but the company commanders were helpless. Then, instead of the artillery support we asked for countless times, a Hungarian hussar regiment appeared on the scene. We laughed. What the hell do they want here with their graceful, elegant horses? We froze at once: these Hungarians went crazy. Cavalry Squadron approached after a cavalry squadron. The command word rang. The bronze-brown, slender riders almost grew to their saddle. Their shining colonel of golden parolis jerked his sword. Four or five armored cars cut out of the wings, and the regiment slashed across the wide plain with flashing swords in the afternoon sun. Seydlitz attacked like this once before. Forgetting all caution, we climbed out of our covers. It was all like a great equestrian movie. The first shots rumbled, then became less frequent. With astonished eyes, in disbelief, we watched as the Soviet regiment, which had so far repulsed our attacks with desperate determination, now turned around and left its positions in panic. And the triumphant Hungarians chased the Russian in front of them and shredded them with their glittering sabers. The hussar sword, it seems, was a bit much for the nerves of Russians. Now, for once, the ancient weapon has triumphed over modern equipment ....",
"title": "World War II"
},
{
"paragraph_id": 122,
"text": "The last mounted sabre charge by Italian cavalry occurred on August 24, 1942, at Isbuscenski (Russia), when a squadron of the Savoia Cavalry Regiment charged the 812th Siberian Infantry Regiment. The remainder of the regiment, together with the Novara Lancers made a dismounted attack in an action that ended with the retreat of the Russians after heavy losses on both sides. The final Italian cavalry action occurred on October 17, 1942, in Poloj (now Croatia) by a squadron of the Alexandria Cavalry Regiment against a large group of Yugoslav partisans.",
"title": "World War II"
},
{
"paragraph_id": 123,
"text": "Romanian, Hungarian and Italian cavalry were dispersed or disbanded following the retreat of the Axis forces from Russia. Germany still maintained some mounted (mixed with bicycles) SS and Cossack units until the last days of the War.",
"title": "World War II"
},
{
"paragraph_id": 124,
"text": "Finland used mounted troops against Russian forces effectively in forested terrain during the Continuation War. The last Finnish cavalry unit was not disbanded until 1947.",
"title": "World War II"
},
{
"paragraph_id": 125,
"text": "The U.S. Army's last horse cavalry actions were fought during World War II: a) by the 26th Cavalry Regiment—a small mounted regiment of Philippine Scouts which fought the Japanese during the retreat down the Bataan peninsula, until it was effectively destroyed by January 1942; and b) on captured German horses by the mounted reconnaissance section of the U.S. 10th Mountain Division in a spearhead pursuit of the German Army across the Po Valley in Italy in April 1945. The last horsed U.S. Cavalry (the Second Cavalry Division) were dismounted in March 1944.",
"title": "World War II"
},
{
"paragraph_id": 126,
"text": "All British Army cavalry regiments had been mechanised since 1 March 1942 when the Queen's Own Yorkshire Dragoons (Yeomanry) was converted to a motorised role, following mounted service against the Vichy French in Syria the previous year. The final cavalry charge by British Empire forces occurred on 21 March 1942 when a 60 strong patrol of the Burma Frontier Force encountered Japanese infantry near Toungoo airfield in central Myanmar. The Sikh sowars of the Frontier Force cavalry, led by Captain Arthur Sandeman of The Central India Horse (21st King George V's Own Horse), charged in the old style with sabres and most were killed.",
"title": "World War II"
},
{
"paragraph_id": 127,
"text": "In the early stages of World War II, mounted units of the Mongolian People's Army were involved in the Battle of Khalkhin Gol against invading Japanese forces. Soviet forces under the command of Georgy Zhukov, together with Mongolian forces, defeated the Japanese Sixth army and effectively ended the Soviet–Japanese Border Wars. After the Soviet–Japanese Neutrality Pact of 1941, Mongolia remained neutral throughout most of the war, but its geographical situation meant that the country served as a buffer between Japanese forces and the Soviet Union. In addition to keeping around 10% of the population under arms, Mongolia provided half a million trained horses for use by the Soviet Army. In 1945 a partially mounted Soviet-Mongolian Cavalry Mechanized Group played a supporting role on the western flank of the Soviet invasion of Manchuria. The last active service seen by cavalry units of the Mongolian Army occurred in 1946–1948, during border clashes between Mongolia and the Republic of China.",
"title": "World War II"
},
{
"paragraph_id": 128,
"text": "While most modern \"cavalry\" units have some historic connection with formerly mounted troops this is not always the case. The modern Irish Defence Forces (DF) includes a \"Cavalry Corps\" equipped with armoured cars and Scorpion tracked combat reconnaissance vehicles. The DF has never included horse cavalry since its establishment in 1922 (other than a small mounted escort of Blue Hussars drawn from the Artillery Corps when required for ceremonial occasions). However, the mystique of the cavalry is such that the name has been introduced for what was always a mechanised force.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 129,
"text": "Some engagements in late 20th and early 21st century guerrilla wars involved mounted troops, particularly against partisan or guerrilla fighters in areas with poor transport infrastructure. Such units were not used as cavalry but rather as mounted infantry. Examples occurred in Afghanistan, Portuguese Africa and Rhodesia. The French Army used existing mounted squadrons of Spahis to a limited extent for patrol work during the Algerian War (1954–1962). The Swiss Army maintained a mounted dragoon regiment for combat purposes until 1973. The Portuguese Army used horse mounted cavalry with some success in the wars of independence in Angola and Mozambique in the 1960s and 1970s. During the 1964–1979 Rhodesian Bush War the Rhodesian Army created an elite mounted infantry unit called Grey's Scouts to fight unconventional actions against the rebel forces of Robert Mugabe and Joshua Nkomo. The horse mounted infantry of the Scouts were effective and reportedly feared by their opponents in the rebel African forces. In the 1978 to present Afghan Civil War period there have been several instances of horse mounted combat.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 130,
"text": "Central and South American armies maintained mounted cavalry for longer than those of Asia, Europe, or North America. The Mexican Army included a number of horse mounted cavalry regiments as late as the mid-1990s and the Chilean Army had five such regiments in 1983 as mounted mountain troops.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 131,
"text": "The Soviet Army retained horse cavalry divisions until 1955.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 132,
"text": "Today the Indian Army's 61st Cavalry is reported to be the largest existing horse-mounted cavalry unit still having operational potential. It was raised in 1951 from the amalgamated state cavalry squadrons of Gwalior, Jodhpur, and Mysore. While primarily utilised for ceremonial purposes, the regiment can be deployed for internal security or police roles if required. The 61st Cavalry and the President's Body Guard parade in full dress uniform in New Delhi each year in what is probably the largest assembly of traditional cavalry still to be seen in the world. Both the Indian and the Pakistani armies maintain armoured regiments with the titles of Lancers or Horse, dating back to the 19th century.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 133,
"text": "As of 2007, the Chinese People's Liberation Army employed two battalions of horse-mounted border guards in Xinjiang for border patrol purposes. PLA mounted units last saw action during border clashes with Vietnam in the 1970s and 1980s, after which most cavalry units were disbanded as part of major military downsizing in the 1980s. In the wake of the 2008 Sichuan earthquake, there were calls to rebuild the army horse inventory for disaster relief in difficult terrain. Subsequent Chinese media reports confirm that the PLA maintains operational horse cavalry at squadron strength in Xinjiang and Inner Mongolia for scouting, logistical, and border security purposes.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 134,
"text": "The Chilean Army still maintains a mixed armoured cavalry regiment, with elements of it acting as mounted mountain exploration troops, based in the city of Angol, being part of the III Mountain Division, and another independent exploration cavalry detachment in the town of Chaitén. The rugged mountain terrain calls for the use of special horses suited for that use.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 135,
"text": "The Argentine Army has two mounted cavalry units: the Regiment of Horse Grenadiers, which performs mostly ceremonial duties but at the same time is responsible for the president's security (in this case, acting as infantry), and the 4th Mountain Cavalry Regiment (which comprises both horse and light armoured squadrons), stationed in San Martín de los Andes, where it has an exploration role as part the 6th Mountain Brigade. Most armoured cavalry units of the Army are considered successors to the old cavalry regiments from the Independence Wars, and keep their traditional names, such as Hussars, Cuirassiers, Lancers, etc., and uniforms. Equestrian training remains an important part of their tradition, especially among officers.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 137,
"text": "Cavalry or mounted gendarmerie units continue to be maintained for purely or primarily ceremonial purposes by the Algerian, Argentine, Bolivian, Brazilian, British, Bulgarian, Canadian, Chilean, Colombian, Danish, Dutch, Finnish, French, Hungarian, Indian, Italian, Jordanian, Malaysian, Moroccan, Nepalese, Nigerian, North Korean, Omani, Pakistani, Panamanian, Paraguayan, Peruvian, Polish, Portuguese, Russian, Senegalese, Spanish, Swedish, Thai, Tunisian, Turkmenistan, United States, Uruguayan and Venezuelan armed forces.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 138,
"text": "A number of armoured regiments in the British Army retain the historic designations of Hussars, Dragoons, Light Dragoons, Dragoon Guards, Lancers and Yeomanry. Only the Household Cavalry (consisting of the Life Guards' mounted squadron, The Blues and Royals' mounted squadron, the State Trumpeters of The Household Cavalry and the Household Cavalry Mounted Band) are maintained for mounted (and dismounted) ceremonial duties in London.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 139,
"text": "The French Army still has regiments with the historic designations of Cuirassiers, Hussars, Chasseurs, Dragoons and Spahis. Only the cavalry of the Republican Guard and a ceremonial fanfare detachment of trumpeters for the cavalry/armoured branch as a whole are now mounted.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 140,
"text": "In the Canadian Army, a number of regular and reserve units have cavalry roots, including The Royal Canadian Hussars (Montreal), the Governor General's Horse Guards, Lord Strathcona's Horse, The British Columbia Dragoons, The Royal Canadian Dragoons, and the South Alberta Light Horse. Of these, only Lord Strathcona's Horse and the Governor General's Horse Guards maintain an official ceremonial horse-mounted cavalry troop or squadron.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 141,
"text": "The modern Pakistan army maintains about 40 armoured regiments with the historic titles of Lancers, Cavalry or Horse. Six of these date back to the 19th century, although only the President's Body Guard remains horse-mounted.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 142,
"text": "In 2002, the Army of the Russian Federation reintroduced a ceremonial mounted squadron wearing historic uniforms.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 143,
"text": "Both the Australian and New Zealand armies follow the British practice of maintaining traditional titles (Light Horse or Mounted Rifles) for modern mechanised units. However, neither country retains a horse-mounted unit.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 144,
"text": "Several armored units of the modern United States Army retain the designation of \"armored cavalry\". The United States also has \"air cavalry\" units equipped with helicopters. The Horse Cavalry Detachment of the U.S. Army's 1st Cavalry Division, made up of active duty soldiers, still functions as an active unit, trained to approximate the weapons, tools, equipment and techniques used by the United States Cavalry in the 1880s.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 145,
"text": "The First Troop Philadelphia City Cavalry is a volunteer unit within the Pennsylvania Army National Guard which serves as a combat force when in federal service but acts in a mounted disaster relief role when in state service. In addition, the Parsons' Mounted Cavalry is a Reserve Officer Training Corps unit which forms part of the Corps of Cadets at Texas A&M University. Valley Forge Military Academy and College also has a Mounted Company, known as D-Troop .",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 146,
"text": "Some individual U.S. states maintain cavalry units as a part of their respective state defense forces. The Maryland Defense Force includes a cavalry unit, Cavalry Troop A, which serves primarily as a ceremonial unit. The unit training includes a saber qualification course based upon the 1926 U.S. Army course. Cavalry Troop A also assists other Maryland agencies as a rural search and rescue asset. In Massachusetts, The National Lancers trace their lineage to a volunteer cavalry militia unit established in 1836 and are currently organized as an official part of the Massachusetts Organized Militia. The National Lancers maintain three units, Troops A, B, and C, which serve in a ceremonial role and assist in search and rescue missions. In July 2004, the National Lancers were ordered into active state service to guard Camp Curtis Guild during the 2004 Democratic National Convention. The Governor's Horse Guard of Connecticut maintains two companies which are trained in urban crowd control. In 2020, the California State Guard stood up the 26th Mounted Operations Detachment, a search-and-rescue cavalry unit.",
"title": "Post–World War II to the present day"
},
{
"paragraph_id": 147,
"text": "From the beginning of civilization to the 20th century, ownership of heavy cavalry horses has been a mark of wealth amongst settled peoples. A cavalry horse involves considerable expense in breeding, training, feeding, and equipment, and has very little productive use except as a mode of transport.",
"title": "Social status"
},
{
"paragraph_id": 148,
"text": "For this reason, and because of their often decisive military role, the cavalry has typically been associated with high social status. This was most clearly seen in the feudal system, where a lord was expected to enter combat armored and on horseback and bring with him an entourage of lightly armed peasants on foot. If landlords and peasant levies came into conflict, the poorly trained footmen would be ill-equipped to defeat armored knights.",
"title": "Social status"
},
{
"paragraph_id": 149,
"text": "In later national armies, service as an officer in the cavalry was generally a badge of high social status. For instance prior to 1914 most officers of British cavalry regiments came from a socially privileged background and the considerable expenses associated with their role generally required private means, even after it became possible for officers of the line infantry regiments to live on their pay. Options open to poorer cavalry officers in the various European armies included service with less fashionable (though often highly professional) frontier or colonial units. These included the British Indian cavalry, the Russian Cossacks or the French Chasseurs d'Afrique.",
"title": "Social status"
},
{
"paragraph_id": 150,
"text": "During the 19th and early 20th centuries most monarchies maintained a mounted cavalry element in their royal or imperial guards. These ranged from small units providing ceremonial escorts and palace guards, through to large formations intended for active service. The mounted escort of the Spanish Royal Household provided an example of the former and the twelve cavalry regiments of the Prussian Imperial Guard an example of the latter. In either case the officers of such units were likely to be drawn from the aristocracies of their respective societies.",
"title": "Social status"
},
{
"paragraph_id": 151,
"text": "Some sense of the noise and power of a cavalry charge can be gained from the 1970 film Waterloo, which featured some 2,000 cavalrymen, some of them Cossacks. It included detailed displays of the horsemanship required to manage animal and weapons in large numbers at the gallop (unlike the real battle of Waterloo, where deep mud significantly slowed the horses). The Gary Cooper movie They Came to Cordura contains a scene of a cavalry regiment deploying from march to battle line formation. A smaller-scale cavalry charge can be seen in The Lord of the Rings: The Return of the King (2003); although the finished scene has substantial computer-generated imagery, raw footage and reactions of the riders are shown in the Extended Version DVD Appendices.",
"title": "On film"
},
{
"paragraph_id": 152,
"text": "Other films that show cavalry actions include:",
"title": "On film"
}
] | Historically, cavalry are soldiers or warriors who fight mounted on horseback. Cavalry were the most mobile of the combat arms, operating as light cavalry in the roles of reconnaissance, screening, and skirmishing in many armies, or as heavy cavalry for decisive shock attacks in other armies. An individual soldier in the cavalry is known by a number of designations depending on era and tactics, such as a cavalryman, horseman, trooper, cataphract, knight, drabant, hussar, uhlan, mamluk, cuirassier, lancer, dragoon, or horse archer. The designation of cavalry was not usually given to any military forces that used other animals for mounts, such as camels or elephants. Infantry who moved on horseback, but dismounted to fight on foot, were known in the early 17th to the early 18th century as dragoons, a class of mounted infantry which in most armies later evolved into standard cavalry while retaining their historic designation. Cavalry had the advantage of improved mobility, and a soldier fighting from horseback also had the advantages of greater height, speed, and inertial mass over an opponent on foot. Another element of horse mounted warfare is the psychological impact a mounted soldier can inflict on an opponent. The speed, mobility, and shock value of cavalry were greatly appreciated and exploited in armed forces in the Ancient and Middle Ages; some forces were mostly cavalry, particularly in nomadic societies of Asia, notably the Huns of Attila and the later Mongol armies. In Europe, cavalry became increasingly armoured (heavy), eventually evolving into the mounted knights of the medieval period. During the 17th century, cavalry in Europe discarded most of its armor, which was ineffective against the muskets and cannons that were coming into common use, and by the mid-18th century armor had mainly fallen into obsolescence, although some regiments retained a small thickened cuirass that offered protection against lances, sabres, and bayonets, including some protection against shot fired from a distance. In the interwar period many cavalry units were converted into motorized infantry and mechanized infantry units, or reformed as tank troops. The cavalry tank or cruiser tank was one designed with a speed and purpose beyond that of infantry tanks and would subsequently develop into the main battle tank. Nonetheless, some cavalry still served during World War II. Most cavalry units that are horse-mounted in modern armies serve in purely ceremonial roles, or as mounted infantry in difficult terrain such as mountains or heavily forested areas. Modern usage of the term generally refers to units performing the role of reconnaissance, surveillance, and target acquisition or main battle tank units. | 2001-10-17T00:20:48Z | 2023-12-23T03:02:45Z | [
"Template:Cite book",
"Template:Cite magazine",
"Template:Commons category",
"Template:Short description",
"Template:Further",
"Template:From whom?",
"Template:Reflist",
"Template:Cite news",
"Template:Circa",
"Template:Efn",
"Template:Div col",
"Template:Cite web",
"Template:Full citation needed",
"Template:Main",
"Template:Multiple image",
"Template:Anchor",
"Template:TOC limit",
"Template:See also",
"Template:Circular reference",
"Template:Div col end",
"Template:Cite journal",
"Template:Military and war",
"Template:Distinguish",
"Template:Redirect",
"Template:War",
"Template:Authority control",
"Template:Webarchive",
"Template:Sfnp",
"Template:More citations needed",
"Template:Citation needed",
"Template:Citation",
"Template:Notelist",
"Template:ISBN",
"Template:Page?"
] | https://en.wikipedia.org/wiki/Cavalry |
6,818 | Citric acid cycle | The citric acid cycle—also known as the Krebs cycle, Szent-Györgyi-Krebs cycle or the TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions that releases the energy stored in nutrients through the oxidation of acetyl-CoA derived from carbohydrates, fats, and proteins. The chemical energy released is made available in the form of ATP. The Krebs cycle is used by organisms that respire (as opposed to organisms that ferment) to generate energy, either by anaerobic respiration or aerobic respiration. In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, that are used in numerous other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Even though it is branded as a 'cycle', it is not necessary for metabolites to follow only one specific route; at least three alternative segments of the citric acid cycle have been recognized.
The name of this metabolic pathway is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions to complete the cycle. The cycle consumes acetate (in the form of acetyl-CoA) and water, reduces NAD+ to NADH, releasing carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP.
In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion.
For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP.
Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937 specifically for his discoveries pertaining to fumaric acid, a component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after being broken down in the Latapie mill and released into aqueous solutions, pigeon breast muscle was very well suited to the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the "Krebs cycle".
The citric acid cycle is a metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two-carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced, which enters the citric acid cycle. The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD) into three equivalents of reduced NAD (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP.
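As a compact summary of the stoichiometry just described, the net reaction for one turn of the cycle can be written in the conventional textbook form below. This is a sketch only: the proton count shown depends on the ionization states assumed for the phosphates and nucleotides, and conventions differ between sources.

```latex
\text{Acetyl-CoA} + 3\,\text{NAD}^{+} + \text{FAD} + \text{GDP} + \text{P}_{i} + 2\,\text{H}_2\text{O}
\longrightarrow
\text{CoA-SH} + 3\,\text{NADH} + 3\,\text{H}^{+} + \text{FADH}_2 + \text{GTP} + 2\,\text{CO}_2
```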
One of the primary sources of acetyl-CoA is the breakdown of sugars by glycolysis, which yields pyruvate that in turn is decarboxylated by the pyruvate dehydrogenase complex, generating acetyl-CoA according to the following reaction scheme:
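A conventional rendering of this decarboxylation (the scheme referred to above) is the following; as with the net cycle reaction, proton bookkeeping varies with the convention used:

```latex
\text{Pyruvate} + \text{CoA-SH} + \text{NAD}^{+} \longrightarrow \text{Acetyl-CoA} + \text{CO}_2 + \text{NADH}
```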
The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. Below is a schematic outline of the cycle:
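The schematic outline itself is a diagram; as a minimal textual sketch of the mammalian pathway (intermediate names only, with enzymes and cofactors omitted), the cycle runs:

oxaloacetate + acetyl-CoA → citrate → cis-aconitate → isocitrate → α-ketoglutarate (2-oxoglutarate) → succinyl-CoA → succinate → fumarate → malate → oxaloacetate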
There are ten basic steps in the citric acid cycle, as outlined below. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, entering at step 0 in the table.
Two carbon atoms are oxidized to CO2, and the energy from these reactions is transferred to other metabolic processes through GTP (or ATP), and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a type of process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the citric acid cycle and the mitochondrial electron transport chain in oxidative phosphorylation. FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain.
Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix.
The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP).
Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2.
Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2.
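This bookkeeping can be captured in a few lines. The Python sketch below simply encodes the per-turn counts quoted above; the names are illustrative, not a library API.

```python
# Products of one turn of the citric acid cycle, as quoted in the text.
PER_TURN = {"GTP_or_ATP": 1, "NADH": 3, "FADH2": 1, "CO2": 2}

def products(turns: int) -> dict:
    """Tally cycle products for a given number of turns."""
    return {name: count * turns for name, count in PER_TURN.items()}

# Each glucose yields two acetyl-CoA, hence two turns of the cycle:
print(products(2))  # {'GTP_or_ATP': 2, 'NADH': 6, 'FADH2': 2, 'CO2': 4}
```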
The above reactions are balanced if Pi represents the H2PO4− ion, ADP and GDP the ADP2− and GDP2− ions, respectively, and ATP and GTP the ATP3− and GTP3− ions, respectively.
The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38.
The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and two equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. If transported using the glycerol phosphate shuttle rather than the malate-aspartate shuttle, transport of two of these equivalents of NADH into the mitochondria effectively consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduce the ATP yield from NADH and FADH2 to less than the theoretical maximum yield. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule.
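The arithmetic behind these estimates is easy to reproduce. The sketch below follows the accounting used in this paragraph (the helper function is hypothetical, written only to show the sums) and recovers the 38, 36, and ~30 figures:

```python
# ATP yield per glucose under different ATP-per-cofactor equivalences.
def atp_per_glucose(atp_per_nadh: float, atp_per_fadh2: float,
                    shuttle_cost: float = 0) -> float:
    glycolysis_atp = 2            # net substrate-level ATP from glycolysis
    tca_gtp = 2                   # GTP (or ATP) from two turns of the cycle
    nadh = 2 + 2 + 6              # glycolysis + pyruvate dehydrogenase + cycle
    fadh2 = 2                     # two turns of the cycle
    return (glycolysis_atp + tca_gtp - shuttle_cost
            + nadh * atp_per_nadh + fadh2 * atp_per_fadh2)

print(atp_per_glucose(3, 2))                   # 38   (theoretical maximum)
print(atp_per_glucose(3, 2, shuttle_cost=2))   # 36   (glycerol phosphate shuttle)
print(atp_per_glucose(2.5, 1.5))               # 30.0 (observed equivalences)
```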
While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (note that the diagrams on this page are specific to the mammalian pathway variant).
Some differences exist between eukaryotes and prokaryotes. The conversion of D-threo-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD-dependent EC 1.1.1.41, while prokaryotes employ the NADP-dependent EC 1.1.1.42. Similarly, the conversion of (S)-malate to oxaloacetate is catalyzed in eukaryotes by the NAD-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4.
A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). In mammals a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4) also operates. The level of utilization of each isoform is tissue dependent. In some acetate-producing bacteria, such as Acetobacter aceti, an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5).
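For reference, the taxon-dependent choices described in this paragraph can be summarized as a simple lookup; this is a restatement of the text above, not an exhaustive survey.

```python
# Enzymes catalyzing the succinyl-CoA -> succinate step, as described above.
SUCCINYL_COA_TO_SUCCINATE = {
    "most organisms": "EC 6.2.1.5, succinate-CoA ligase (ADP-forming)",
    "mammals (additionally)": "EC 6.2.1.4, succinate-CoA ligase (GDP-forming)",
    "Acetobacter aceti": "EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase",
    "Helicobacter pylori": "EC 2.8.3.5, succinyl-CoA:acetoacetate CoA-transferase",
}

print(SUCCINYL_COA_TO_SUCCINATE["Helicobacter pylori"])
```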
Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase.
In cancer, there are substantial metabolic derangements that occur to ensure the proliferation of tumor cells, and consequently metabolites can accumulate which serve to facilitate tumorigenesis, dubbed oncometabolites. Among the best characterized oncometabolites is 2-hydroxyglutarate, which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH) (which under normal circumstances catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; in this case an additional reduction step occurs after the formation of alpha-ketoglutarate via NADPH to yield 2-hydroxyglutarate), and hence IDH is considered an oncogene. Under physiological conditions, 2-hydroxyglutarate is produced in error as a minor product of several metabolic pathways but is readily converted to alpha-ketoglutarate via hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH); it does not have a known physiologic role in mammalian cells. Of note, in cancer, 2-hydroxyglutarate is likely a terminal metabolite, as isotope labelling experiments of colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure. In cancer, 2-hydroxyglutarate serves as a competitive inhibitor of a number of alpha-ketoglutarate-dependent dioxygenases, enzymes whose reactions require alpha-ketoglutarate. This mutation results in several important changes to the metabolism of the cell. For one thing, because there is an extra NADPH-consuming reduction, this can contribute to depletion of cellular stores of NADPH and also reduce levels of alpha-ketoglutarate available to the cell. In particular, the depletion of NADPH is problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles in the cell. It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell, as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. However, in the absence of alpha-ketoglutarate this cannot be done and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs, which require hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze their reactions results in stabilization of hypoxia-inducible factor alpha, since hydroxylation is necessary to promote degradation of the latter (under conditions of low oxygen there will not be adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration.
Allosteric regulation by metabolites. The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability. If the cycle were permitted to run unchecked, large amounts of metabolic energy could be wasted in overproduction of reduced coenzymes such as NADH, and of ATP. The major eventual substrate of the cycle is ADP, which gets converted to ATP. A reduced amount of ADP causes accumulation of the precursor NADH, which in turn can inhibit a number of enzymes. NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-CoA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%.
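To see why a sub-10% change in an effector cannot produce large rate swings, consider a deliberately simple Hill-type inhibition model. This is purely illustrative; the functional form and parameters below are hypothetical and are not drawn from the text.

```python
# Fractional enzyme rate under Hill-type inhibition: v = 1 / (1 + (I/Ki)**n).
def relative_rate(i_over_ki: float, n: float) -> float:
    return 1.0 / (1.0 + i_over_ki ** n)

for n in (1, 2, 4):
    v0 = relative_rate(1.0, n)   # operating point I = Ki
    v1 = relative_rate(1.1, n)   # inhibitor concentration raised by 10%
    print(f"n={n}: rate falls by {100 * (v0 - v1) / v0:.0f}%")
# Even with strong cooperativity (n=4), the rate moves by under ~20%,
# consistent with the argument above.
```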
Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme.
Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach up to the tens of micromolar levels during cellular activation. It activates pyruvate dehydrogenase phosphatase which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway.
Transcriptional regulation. Recent work has demonstrated an important link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates its interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets it for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases. Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF.
Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to "fill up". These increase the amount of acetyl CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed "cataplerotic" reactions.
In this section and in the next, the citric acid cycle intermediates are indicated in italics to distinguish them from other substrates and end-products.
Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, as in the normal cycle.
However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form oxaloacetate. This latter reaction "fills up" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity.
In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate, and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell.
Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Next, trans-enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just as fumarate is hydrated to malate. Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate.
In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis.
In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. alpha-ketoglutarate derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into acetyl-CoA which can be burned to CO2 and water, or used to form ketone bodies, which can likewise only be burned in tissues other than the liver (where they are formed) or excreted via the urine or breath. These latter amino acids are therefore termed "ketogenic" amino acids, whereas those that enter the citric acid cycle as intermediates can only be cataplerotically removed by entering the gluconeogenic pathway via malate which is transported out of the mitochondrion to be converted into cytosolic oxaloacetate and ultimately into glucose. These are the so-called "glucogenic" amino acids. De-aminated alanine, cysteine, glycine, serine, and threonine are converted to pyruvate and can consequently either enter the citric acid cycle as oxaloacetate (an anaplerotic reaction) or as acetyl-CoA to be disposed of as CO2 and water.
In fat catabolism, triglycerides are hydrolyzed to break them into fatty acids and glycerol. In the liver the glycerol can be converted into glucose via dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by way of gluconeogenesis. In skeletal muscle, glycerol is used in glycolysis by converting glycerol into glycerol-3-phosphate, then into dihydroxyacetone phosphate (DHAP), then into glyceraldehyde-3-phosphate.
In many tissues, especially heart and skeletal muscle tissue, fatty acids are broken down through a process known as beta oxidation, which results in the production of mitochondrial acetyl-CoA, which can be used in the citric acid cycle. Beta oxidation of fatty acids with an odd number of methylene bridges produces propionyl-CoA, which is then converted into succinyl-CoA and fed into the citric acid cycle as an anaplerotic intermediate.
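The conversion named here follows a standard textbook route, sketched below with its classical enzymes; these details (including the vitamin B12 cofactor) are standard biochemistry rather than drawn from this text:

propionyl-CoA → (propionyl-CoA carboxylase) methylmalonyl-CoA → (methylmalonyl-CoA mutase, with vitamin B12) succinyl-CoA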
The total energy gained from the complete breakdown of one (six-carbon) molecule of glucose by glycolysis, the formation of 2 acetyl-CoA molecules, their catabolism in the citric acid cycle, and oxidative phosphorylation equals about 30 ATP molecules, in eukaryotes. The number of ATP molecules derived from the beta oxidation of a 6 carbon segment of a fatty acid chain, and the subsequent oxidation of the resulting 3 molecules of acetyl-CoA is 40.
In this subheading, as in the previous one, the TCA intermediates are identified by italics.
Several of the citric acid cycle intermediates are used for the synthesis of important compounds, which will have significant cataplerotic effects on the cycle. Acetyl-CoA cannot be transported out of the mitochondrion. To obtain cytosolic acetyl-CoA, citrate is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to the mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is used for fatty acid synthesis and the production of cholesterol. Cholesterol can, in turn, be used to synthesize the steroid hormones, bile salts, and vitamin D.
The carbon skeletons of many non-essential amino acids are made from citric acid cycle intermediates. To turn them into amino acids the alpha keto-acids formed from the citric acid cycle intermediates have to acquire their amino groups from glutamate in a transamination reaction, in which pyridoxal phosphate is a cofactor. In this reaction the glutamate is converted into alpha-ketoglutarate, which is a citric acid cycle intermediate. The intermediates that can provide the carbon skeletons for amino acid synthesis are oxaloacetate which forms aspartate and asparagine; and alpha-ketoglutarate which forms glutamine, proline, and arginine.
Of these amino acids, aspartate and glutamine are used, together with carbon and nitrogen atoms from other sources, to form the purines that are used as the bases in DNA and RNA, as well as in ATP, AMP, GTP, NAD, FAD and CoA.
The pyrimidines are partly assembled from aspartate (derived from oxaloacetate). The pyrimidines, thymine, cytosine and uracil, form the complementary bases to the purine bases in DNA and RNA, and are also components of CTP, UMP, UDP and UTP.
The majority of the carbon atoms in the porphyrins come from the citric acid cycle intermediate, succinyl-CoA. These molecules are an important component of the hemoproteins, such as hemoglobin, myoglobin and various cytochromes.
During gluconeogenesis mitochondrial oxaloacetate is reduced to malate which is then transported out of the mitochondrion, to be oxidized back to oxaloacetate in the cytosol. Cytosolic oxaloacetate is then decarboxylated to phosphoenolpyruvate by phosphoenolpyruvate carboxykinase, which is the rate limiting step in the conversion of nearly all the gluconeogenic precursors (such as the glucogenic amino acids and lactate) into glucose by the liver and kidney.
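The decarboxylation step named here has a standard textbook form, shown below as a sketch (cofactor and compartment details vary by species):

```latex
\text{Oxaloacetate} + \text{GTP} \longrightarrow \text{Phosphoenolpyruvate} + \text{CO}_2 + \text{GDP}
```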
Because the citric acid cycle is involved in both catabolic and anabolic processes, it is known as an amphibolic pathway.
The metabolic role of lactate is well recognized as a fuel for tissues; it is also of interest in mitochondrial cytopathies such as DPH Cytopathy and in the field of oncology (tumors). In the classical Cori cycle, muscles produce lactate which is then taken up by the liver for gluconeogenesis. New studies suggest that lactate can be used as a source of carbon for the TCA cycle.
It is believed that components of the citric acid cycle were derived from anaerobic bacteria, and that the TCA cycle itself may have evolved more than once. Theoretically, several alternatives to the TCA cycle exist; however, the TCA cycle appears to be the most efficient. If several TCA alternatives had evolved independently, they all appear to have converged to the TCA cycle. | [
{
"paragraph_id": 0,
"text": "The citric acid cycle —also known as the Krebs cycle, Szent-Györgyi-Krebs cycle or the TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions to release the energy stored in nutrients through the oxidation of acetyl-CoA derived from carbohydrates, fats, and proteins. The chemical energy released is available under the form of ATP. The Krebs cycle is used by organisms that respire (as opposed to organisms that ferment) to generate energy, either by anaerobic respiration or aerobic respiration. In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, that are used in numerous other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Even though it is branded as a 'cycle', it is not necessary for metabolites to follow only one specific route; at least three alternative segments of the citric acid cycle have been recognized.",
"title": ""
},
{
"paragraph_id": 1,
"text": "The name of this metabolic pathway is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions to complete the cycle. The cycle consumes acetate (in the form of acetyl-CoA) and water, reduces NAD to NADH, releasing carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion.",
"title": ""
},
{
"paragraph_id": 3,
"text": "For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP.",
"title": ""
},
{
"paragraph_id": 4,
"text": "Several of the components and reactions of the citric acid cycle were established in the 1930s by the research of Albert Szent-Györgyi, who received the Nobel Prize in Physiology or Medicine in 1937 specifically for his discoveries pertaining to fumaric acid, a component of the cycle. He made this discovery by studying pigeon breast muscle. Because this tissue maintains its oxidative capacity well after breaking down in the Latapie mill and releasing in aqueous solutions, breast muscle of the pigeon was very well qualified for the study of oxidative reactions. The citric acid cycle itself was finally identified in 1937 by Hans Adolf Krebs and William Arthur Johnson while at the University of Sheffield, for which the former received the Nobel Prize for Physiology or Medicine in 1953, and for whom the cycle is sometimes named the \"Krebs cycle\".",
"title": "Discovery"
},
{
"paragraph_id": 5,
"text": "The citric acid cycle is a metabolic pathway that connects carbohydrate, fat, and protein metabolism. The reactions of the cycle are carried out by eight enzymes that completely oxidize acetate (a two carbon molecule), in the form of acetyl-CoA, into two molecules each of carbon dioxide and water. Through catabolism of sugars, fats, and proteins, the two-carbon organic product acetyl-CoA is produced which enters the citric acid cycle. The reactions of the cycle also convert three equivalents of nicotinamide adenine dinucleotide (NAD) into three equivalents of reduced NAD (NADH), one equivalent of flavin adenine dinucleotide (FAD) into one equivalent of FADH2, and one equivalent each of guanosine diphosphate (GDP) and inorganic phosphate (Pi) into one equivalent of guanosine triphosphate (GTP). The NADH and FADH2 generated by the citric acid cycle are, in turn, used by the oxidative phosphorylation pathway to generate energy-rich ATP.",
"title": "Overview"
},
{
"paragraph_id": 6,
"text": "One of the primary sources of acetyl-CoA is from the breakdown of sugars by glycolysis which yield pyruvate that in turn is decarboxylated by the pyruvate dehydrogenase complex generating acetyl-CoA according to the following reaction scheme:",
"title": "Overview"
},
{
"paragraph_id": 7,
"text": "The product of this reaction, acetyl-CoA, is the starting point for the citric acid cycle. Acetyl-CoA may also be obtained from the oxidation of fatty acids. Below is a schematic outline of the cycle:",
"title": "Overview"
},
{
"paragraph_id": 8,
"text": "There are ten basic steps in the citric acid cycle, as outlined below. The cycle is continuously supplied with new carbon in the form of acetyl-CoA, entering at step 0 in the table.",
"title": "Steps"
},
{
"paragraph_id": 9,
"text": "Two carbon atoms are oxidized to CO2, the energy from these reactions is transferred to other metabolic processes through GTP (or ATP), and as electrons in NADH and QH2. The NADH generated in the citric acid cycle may later be oxidized (donate its electrons) to drive ATP synthesis in a type of process called oxidative phosphorylation. FADH2 is covalently attached to succinate dehydrogenase, an enzyme which functions both in the citric acid cycle and the mitochondrial electron transport chain in oxidative phosphorylation. FADH2, therefore, facilitates transfer of electrons to coenzyme Q, which is the final electron acceptor of the reaction catalyzed by the succinate:ubiquinone oxidoreductase complex, also acting as an intermediate in the electron transport chain.",
"title": "Steps"
},
{
"paragraph_id": 10,
"text": "Mitochondria in animals, including humans, possess two succinyl-CoA synthetases: one that produces GTP from GDP, and another that produces ATP from ADP. Plants have the type that produces ATP (ADP-forming succinyl-CoA synthetase). Several of the enzymes in the cycle may be loosely associated in a multienzyme protein complex within the mitochondrial matrix.",
"title": "Steps"
},
{
"paragraph_id": 11,
"text": "The GTP that is formed by GDP-forming succinyl-CoA synthetase may be utilized by nucleoside-diphosphate kinase to form ATP (the catalyzed reaction is GTP + ADP → GDP + ATP).",
"title": "Steps"
},
{
"paragraph_id": 12,
"text": "Products of the first turn of the cycle are one GTP (or ATP), three NADH, one FADH2 and two CO2.",
"title": "Products"
},
{
"paragraph_id": 13,
"text": "Because two acetyl-CoA molecules are produced from each glucose molecule, two cycles are required per glucose molecule. Therefore, at the end of two cycles, the products are: two GTP, six NADH, two FADH2, and four CO2.",
"title": "Products"
},
{
"paragraph_id": 14,
"text": "The above reactions are balanced if Pi represents the H2PO4 ion, ADP and GDP the ADP and GDP ions, respectively, and ATP and GTP the ATP and GTP ions, respectively.",
"title": "Products"
},
{
"paragraph_id": 15,
"text": "The total number of ATP molecules obtained after complete oxidation of one glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is estimated to be between 30 and 38.",
"title": "Products"
},
{
"paragraph_id": 16,
"text": "The theoretical maximum yield of ATP through oxidation of one molecule of glucose in glycolysis, citric acid cycle, and oxidative phosphorylation is 38 (assuming 3 molar equivalents of ATP per equivalent NADH and 2 ATP per FADH2). In eukaryotes, two equivalents of NADH and two equivalents of ATP are generated in glycolysis, which takes place in the cytoplasm. If transported using the glycerol phosphate shuttle rather than the malate-aspartate shuttle, transport of two of these equivalents of NADH into the mitochondria effectively consumes two equivalents of ATP, thus reducing the net production of ATP to 36. Furthermore, inefficiencies in oxidative phosphorylation due to leakage of protons across the mitochondrial membrane and slippage of the ATP synthase/proton pump commonly reduces the ATP yield from NADH and FADH2 to less than the theoretical maximum yield. The observed yields are, therefore, closer to ~2.5 ATP per NADH and ~1.5 ATP per FADH2, further reducing the total net production of ATP to approximately 30. An assessment of the total ATP yield with newly revised proton-to-ATP ratios provides an estimate of 29.85 ATP per glucose molecule.",
"title": "Efficiency"
},
{
"paragraph_id": 17,
"text": "While the citric acid cycle is in general highly conserved, there is significant variability in the enzymes found in different taxa (note that the diagrams on this page are specific to the mammalian pathway variant).",
"title": "Variation"
},
{
"paragraph_id": 18,
"text": "Some differences exist between eukaryotes and prokaryotes. The conversion of D-threo-isocitrate to 2-oxoglutarate is catalyzed in eukaryotes by the NAD-dependent EC 1.1.1.41, while prokaryotes employ the NADP-dependent EC 1.1.1.42. Similarly, the conversion of (S)-malate to oxaloacetate is catalyzed in eukaryotes by the NAD-dependent EC 1.1.1.37, while most prokaryotes utilize a quinone-dependent enzyme, EC 1.1.5.4.",
"title": "Variation"
},
{
"paragraph_id": 19,
"text": "A step with significant variability is the conversion of succinyl-CoA to succinate. Most organisms utilize EC 6.2.1.5, succinate–CoA ligase (ADP-forming) (despite its name, the enzyme operates in the pathway in the direction of ATP formation). In mammals a GTP-forming enzyme, succinate–CoA ligase (GDP-forming) (EC 6.2.1.4) also operates. The level of utilization of each isoform is tissue dependent. In some acetate-producing bacteria, such as Acetobacter aceti, an entirely different enzyme catalyzes this conversion – EC 2.8.3.18, succinyl-CoA:acetate CoA-transferase. This specialized enzyme links the TCA cycle with acetate metabolism in these organisms. Some bacteria, such as Helicobacter pylori, employ yet another enzyme for this conversion – succinyl-CoA:acetoacetate CoA-transferase (EC 2.8.3.5).",
"title": "Variation"
},
{
"paragraph_id": 20,
"text": "Some variability also exists at the previous step – the conversion of 2-oxoglutarate to succinyl-CoA. While most organisms utilize the ubiquitous NAD-dependent 2-oxoglutarate dehydrogenase, some bacteria utilize a ferredoxin-dependent 2-oxoglutarate synthase (EC 1.2.7.3). Other organisms, including obligately autotrophic and methanotrophic bacteria and archaea, bypass succinyl-CoA entirely, and convert 2-oxoglutarate to succinate via succinate semialdehyde, using EC 4.1.1.71, 2-oxoglutarate decarboxylase, and EC 1.2.1.79, succinate-semialdehyde dehydrogenase.",
"title": "Variation"
},
{
"paragraph_id": 21,
"text": "In cancer, there are substantial metabolic derangements that occur to ensure the proliferation of tumor cells, and consequently metabolites can accumulate which serve to facilitate tumorigenesis, dubbed oncometabolites. Among the best characterized oncometabolites is 2-hydroxyglutarate which is produced through a heterozygous gain-of-function mutation (specifically a neomorphic one) in isocitrate dehydrogenase (IDH) (which under normal circumstances catalyzes the oxidation of isocitrate to oxalosuccinate, which then spontaneously decarboxylates to alpha-ketoglutarate, as discussed above; in this case an additional reduction step occurs after the formation of alpha-ketoglutarate via NADPH to yield 2-hydroxyglutarate), and hence IDH is considered an oncogene. Under physiological conditions, 2-hydroxyglutarate is a minor product of several metabolic pathways as an error but readily converted to alpha-ketoglutarate via hydroxyglutarate dehydrogenase enzymes (L2HGDH and D2HGDH) but does not have a known physiologic role in mammalian cells; of note, in cancer, 2-hydroxyglutarate is likely a terminal metabolite as isotope labelling experiments of colorectal cancer cell lines show that its conversion back to alpha-ketoglutarate is too low to measure. In cancer, 2-hydroxyglutarate serves as a competitive inhibitor for a number of enzymes that facilitate reactions via alpha-ketoglutarate in alpha-ketoglutarate-dependent dioxygenases. This mutation results in several important changes to the metabolism of the cell. For one thing, because there is an extra NADPH-catalyzed reduction, this can contribute to depletion of cellular stores of NADPH and also reduce levels of alpha-ketoglutarate available to the cell. In particular, the depletion of NADPH is problematic because NADPH is highly compartmentalized and cannot freely diffuse between the organelles in the cell. It is produced largely via the pentose phosphate pathway in the cytoplasm. The depletion of NADPH results in increased oxidative stress within the cell as it is a required cofactor in the production of GSH, and this oxidative stress can result in DNA damage. There are also changes on the genetic and epigenetic level through the function of histone lysine demethylases (KDMs) and ten-eleven translocation (TET) enzymes; ordinarily TETs hydroxylate 5-methylcytosines to prime them for demethylation. However, in the absence of alpha-ketoglutarate this cannot be done and there is hence hypermethylation of the cell's DNA, serving to promote epithelial-mesenchymal transition (EMT) and inhibit cellular differentiation. A similar phenomenon is observed for the Jumonji C family of KDMs which require a hydroxylation to perform demethylation at the epsilon-amino methyl group. Additionally, the inability of prolyl hydroxylases to catalyze reactions results in stabilization of hypoxia-inducible factor alpha, which is necessary to promote degradation of the latter (as under conditions of low oxygen there will not be adequate substrate for hydroxylation). This results in a pseudohypoxic phenotype in the cancer cell that promotes angiogenesis, metabolic reprogramming, cell growth, and migration.",
"title": "Variation"
},
{
"paragraph_id": 22,
"text": "Allosteric regulation by metabolites. The regulation of the citric acid cycle is largely determined by product inhibition and substrate availability. If the cycle were permitted to run unchecked, large amounts of metabolic energy could be wasted in overproduction of reduced coenzyme such as NADH and ATP. The major eventual substrate of the cycle is ADP which gets converted to ATP. A reduced amount of ADP causes accumulation of precursor NADH which in turn can inhibit a number of enzymes. NADH, a product of all dehydrogenases in the citric acid cycle with the exception of succinate dehydrogenase, inhibits pyruvate dehydrogenase, isocitrate dehydrogenase, α-ketoglutarate dehydrogenase, and also citrate synthase. Acetyl-coA inhibits pyruvate dehydrogenase, while succinyl-CoA inhibits alpha-ketoglutarate dehydrogenase and citrate synthase. When tested in vitro with TCA enzymes, ATP inhibits citrate synthase and α-ketoglutarate dehydrogenase; however, ATP levels do not change more than 10% in vivo between rest and vigorous exercise. There is no known allosteric mechanism that can account for large changes in reaction rate from an allosteric effector whose concentration changes less than 10%.",
"title": "Regulation"
},
{
"paragraph_id": 23,
"text": "Citrate is used for feedback inhibition, as it inhibits phosphofructokinase, an enzyme involved in glycolysis that catalyses formation of fructose 1,6-bisphosphate, a precursor of pyruvate. This prevents a constant high rate of flux when there is an accumulation of citrate and a decrease in substrate for the enzyme.",
"title": "Regulation"
},
{
"paragraph_id": 24,
"text": "Regulation by calcium. Calcium is also used as a regulator in the citric acid cycle. Calcium levels in the mitochondrial matrix can reach up to the tens of micromolar levels during cellular activation. It activates pyruvate dehydrogenase phosphatase which in turn activates the pyruvate dehydrogenase complex. Calcium also activates isocitrate dehydrogenase and α-ketoglutarate dehydrogenase. This increases the reaction rate of many of the steps in the cycle, and therefore increases flux throughout the pathway.",
"title": "Regulation"
},
{
"paragraph_id": 25,
"text": "Transcriptional regulation. Recent work has demonstrated an important link between intermediates of the citric acid cycle and the regulation of hypoxia-inducible factors (HIF). HIF plays a role in the regulation of oxygen homeostasis, and is a transcription factor that targets angiogenesis, vascular remodeling, glucose utilization, iron transport and apoptosis. HIF is synthesized constitutively, and hydroxylation of at least one of two critical proline residues mediates their interaction with the von Hippel Lindau E3 ubiquitin ligase complex, which targets them for rapid degradation. This reaction is catalysed by prolyl 4-hydroxylases. Fumarate and succinate have been identified as potent inhibitors of prolyl hydroxylases, thus leading to the stabilisation of HIF.",
"title": "Regulation"
},
{
"paragraph_id": 26,
"text": "Several catabolic pathways converge on the citric acid cycle. Most of these reactions add intermediates to the citric acid cycle, and are therefore known as anaplerotic reactions, from the Greek meaning to \"fill up\". These increase the amount of acetyl CoA that the cycle is able to carry, increasing the mitochondrion's capability to carry out respiration if this is otherwise a limiting factor. Processes that remove intermediates from the cycle are termed \"cataplerotic\" reactions.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 27,
"text": "In this section and in the next, the citric acid cycle intermediates are indicated in italics to distinguish them from other substrates and end-products.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 28,
"text": "Pyruvate molecules produced by glycolysis are actively transported across the inner mitochondrial membrane, and into the matrix. Here they can be oxidized and combined with coenzyme A to form CO2, acetyl-CoA, and NADH, as in the normal cycle.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 29,
"text": "However, it is also possible for pyruvate to be carboxylated by pyruvate carboxylase to form oxaloacetate. This latter reaction \"fills up\" the amount of oxaloacetate in the citric acid cycle, and is therefore an anaplerotic reaction, increasing the cycle's capacity to metabolize acetyl-CoA when the tissue's energy needs (e.g. in muscle) are suddenly increased by activity.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 30,
"text": "In the citric acid cycle all the intermediates (e.g. citrate, iso-citrate, alpha-ketoglutarate, succinate, fumarate, malate, and oxaloacetate) are regenerated during each turn of the cycle. Adding more of any of these intermediates to the mitochondrion therefore means that that additional amount is retained within the cycle, increasing all the other intermediates as one is converted into the other. Hence the addition of any one of them to the cycle has an anaplerotic effect, and its removal has a cataplerotic effect. These anaplerotic and cataplerotic reactions will, during the course of the cycle, increase or decrease the amount of oxaloacetate available to combine with acetyl-CoA to form citric acid. This in turn increases or decreases the rate of ATP production by the mitochondrion, and thus the availability of ATP to the cell.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 31,
"text": "Acetyl-CoA, on the other hand, derived from pyruvate oxidation, or from the beta-oxidation of fatty acids, is the only fuel to enter the citric acid cycle. With each turn of the cycle one molecule of acetyl-CoA is consumed for every molecule of oxaloacetate present in the mitochondrial matrix, and is never regenerated. It is the oxidation of the acetate portion of acetyl-CoA that produces CO2 and water, with the energy thus released captured in the form of ATP. The three steps of beta-oxidation resemble the steps that occur in the production of oxaloacetate from succinate in the TCA cycle. Acyl-CoA is oxidized to trans-Enoyl-CoA while FAD is reduced to FADH2, which is similar to the oxidation of succinate to fumarate. Following, trans-Enoyl-CoA is hydrated across the double bond to beta-hydroxyacyl-CoA, just like fumarate is hydrated to malate. Lastly, beta-hydroxyacyl-CoA is oxidized to beta-ketoacyl-CoA while NAD+ is reduced to NADH, which follows the same process as the oxidation of malate to oxaloacetate.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 32,
"text": "In the liver, the carboxylation of cytosolic pyruvate into intra-mitochondrial oxaloacetate is an early step in the gluconeogenic pathway which converts lactate and de-aminated alanine into glucose, under the influence of high levels of glucagon and/or epinephrine in the blood. Here the addition of oxaloacetate to the mitochondrion does not have a net anaplerotic effect, as another citric acid cycle intermediate (malate) is immediately removed from the mitochondrion to be converted into cytosolic oxaloacetate, which is ultimately converted into glucose, in a process that is almost the reverse of glycolysis.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 33,
"text": "In protein catabolism, proteins are broken down by proteases into their constituent amino acids. Their carbon skeletons (i.e. the de-aminated amino acids) may either enter the citric acid cycle as intermediates (e.g. alpha-ketoglutarate derived from glutamate or glutamine), having an anaplerotic effect on the cycle, or, in the case of leucine, isoleucine, lysine, phenylalanine, tryptophan, and tyrosine, they are converted into acetyl-CoA which can be burned to CO2 and water, or used to form ketone bodies, which too can only be burned in tissues other than the liver where they are formed, or excreted via the urine or breath. These latter amino acids are therefore termed \"ketogenic\" amino acids, whereas those that enter the citric acid cycle as intermediates can only be cataplerotically removed by entering the gluconeogenic pathway via malate which is transported out of the mitochondrion to be converted into cytosolic oxaloacetate and ultimately into glucose. These are the so-called \"glucogenic\" amino acids. De-aminated alanine, cysteine, glycine, serine, and threonine are converted to pyruvate and can consequently either enter the citric acid cycle as oxaloacetate (an anaplerotic reaction) or as acetyl-CoA to be disposed of as CO2 and water.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 34,
"text": "In fat catabolism, triglycerides are hydrolyzed to break them into fatty acids and glycerol. In the liver the glycerol can be converted into glucose via dihydroxyacetone phosphate and glyceraldehyde-3-phosphate by way of gluconeogenesis. In skeletal muscle, glycerol is used in glycolysis by converting glycerol into glycerol-3-phosphate, then into dihydroxyacetone phosphate (DHAP), then into glyceraldehyde-3-phosphate.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 35,
"text": "In many tissues, especially heart and skeletal muscle tissue, fatty acids are broken down through a process known as beta oxidation, which results in the production of mitochondrial acetyl-CoA, which can be used in the citric acid cycle. Beta oxidation of fatty acids with an odd number of methylene bridges produces propionyl-CoA, which is then converted into succinyl-CoA and fed into the citric acid cycle as an anaplerotic intermediate.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 36,
"text": "The total energy gained from the complete breakdown of one (six-carbon) molecule of glucose by glycolysis, the formation of 2 acetyl-CoA molecules, their catabolism in the citric acid cycle, and oxidative phosphorylation equals about 30 ATP molecules, in eukaryotes. The number of ATP molecules derived from the beta oxidation of a 6 carbon segment of a fatty acid chain, and the subsequent oxidation of the resulting 3 molecules of acetyl-CoA is 40.",
"title": "Major metabolic pathways converging on the citric acid cycle"
},
{
"paragraph_id": 37,
"text": "In this subheading, as in the previous one, the TCA intermediates are identified by italics.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 38,
"text": "Several of the citric acid cycle intermediates are used for the synthesis of important compounds, which will have significant cataplerotic effects on the cycle. Acetyl-CoA cannot be transported out of the mitochondrion. To obtain cytosolic acetyl-CoA, citrate is removed from the citric acid cycle and carried across the inner mitochondrial membrane into the cytosol. There it is cleaved by ATP citrate lyase into acetyl-CoA and oxaloacetate. The oxaloacetate is returned to mitochondrion as malate (and then converted back into oxaloacetate to transfer more acetyl-CoA out of the mitochondrion). The cytosolic acetyl-CoA is used for fatty acid synthesis and the production of cholesterol. Cholesterol can, in turn, be used to synthesize the steroid hormones, bile salts, and vitamin D.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 39,
"text": "The carbon skeletons of many non-essential amino acids are made from citric acid cycle intermediates. To turn them into amino acids the alpha keto-acids formed from the citric acid cycle intermediates have to acquire their amino groups from glutamate in a transamination reaction, in which pyridoxal phosphate is a cofactor. In this reaction the glutamate is converted into alpha-ketoglutarate, which is a citric acid cycle intermediate. The intermediates that can provide the carbon skeletons for amino acid synthesis are oxaloacetate which forms aspartate and asparagine; and alpha-ketoglutarate which forms glutamine, proline, and arginine.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 40,
"text": "Of these amino acids, aspartate and glutamine are used, together with carbon and nitrogen atoms from other sources, to form the purines that are used as the bases in DNA and RNA, as well as in ATP, AMP, GTP, NAD, FAD and CoA.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 41,
"text": "The pyrimidines are partly assembled from aspartate (derived from oxaloacetate). The pyrimidines, thymine, cytosine and uracil, form the complementary bases to the purine bases in DNA and RNA, and are also components of CTP, UMP, UDP and UTP.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 42,
"text": "The majority of the carbon atoms in the porphyrins come from the citric acid cycle intermediate, succinyl-CoA. These molecules are an important component of the hemoproteins, such as hemoglobin, myoglobin and various cytochromes.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 43,
"text": "During gluconeogenesis mitochondrial oxaloacetate is reduced to malate which is then transported out of the mitochondrion, to be oxidized back to oxaloacetate in the cytosol. Cytosolic oxaloacetate is then decarboxylated to phosphoenolpyruvate by phosphoenolpyruvate carboxykinase, which is the rate limiting step in the conversion of nearly all the gluconeogenic precursors (such as the glucogenic amino acids and lactate) into glucose by the liver and kidney.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 44,
"text": "Because the citric acid cycle is involved in both catabolic and anabolic processes, it is known as an amphibolic pathway. Evan M.W.Duo Click on genes, proteins and metabolites below to link to respective articles.",
"title": "Citric acid cycle intermediates serve as substrates for biosynthetic processes"
},
{
"paragraph_id": 45,
"text": "The metabolic role of lactate is well recognized as a fuel for tissues, mitochondrial cytopathies such as DPH Cytopathy, and the scientific field of oncology (tumors). In the classical Cori cycle, muscles produce lactate which is then taken up by the liver for gluconeogenesis. New studies suggest that lactate can be used as a source of carbon for the TCA cycle.",
"title": "Glucose feeds the TCA cycle via circulating lactate"
},
{
"paragraph_id": 46,
"text": "It is believed that components of the citric acid cycle were derived from anaerobic bacteria, and that the TCA cycle itself may have evolved more than once. Theoretically, several alternatives to the TCA cycle exist; however, the TCA cycle appears to be the most efficient. If several TCA alternatives had evolved independently, they all appear to have converged to the TCA cycle.",
"title": "Evolution"
}
] | The citric acid cycle —also known as the Krebs cycle, Szent-Györgyi-Krebs cycle or the TCA cycle (tricarboxylic acid cycle)—is a series of biochemical reactions that release the energy stored in nutrients through the oxidation of acetyl-CoA derived from carbohydrates, fats, and proteins. The chemical energy released is made available in the form of ATP. The Krebs cycle is used by organisms that respire (as opposed to organisms that ferment) to generate energy, either by anaerobic respiration or aerobic respiration. In addition, the cycle provides precursors of certain amino acids, as well as the reducing agent NADH, that are used in numerous other reactions. Its central importance to many biochemical pathways suggests that it was one of the earliest components of metabolism. Although it is described as a 'cycle', it is not necessary for metabolites to follow only one specific route; at least three alternative segments of the citric acid cycle have been recognized. The name of this metabolic pathway is derived from the citric acid (a tricarboxylic acid, often called citrate, as the ionized form predominates at biological pH) that is consumed and then regenerated by this sequence of reactions to complete the cycle. The cycle consumes acetate (in the form of acetyl-CoA) and water, reduces NAD+ to NADH, releasing carbon dioxide. The NADH generated by the citric acid cycle is fed into the oxidative phosphorylation (electron transport) pathway. The net result of these two closely linked pathways is the oxidation of nutrients to produce usable chemical energy in the form of ATP. In eukaryotic cells, the citric acid cycle occurs in the matrix of the mitochondrion. In prokaryotic cells, such as bacteria, which lack mitochondria, the citric acid cycle reaction sequence is performed in the cytosol with the proton gradient for ATP production being across the cell's surface (plasma membrane) rather than the inner membrane of the mitochondrion. For each pyruvate molecule (from glycolysis), the overall yield of energy-containing compounds from the citric acid cycle is three NADH, one FADH2, and one GTP. | 2001-10-16T21:15:49Z | 2023-12-28T18:35:55Z | [
"Template:MetabolismMap",
"Template:Authority control",
"Template:More citations needed",
"Template:Cite web",
"Template:Cn",
"Template:Scholia",
"Template:Webarchive",
"Template:Cellular respiration",
"Template:Citric acid cycle enzymes",
"Template:Block indent",
"Template:Nowrap",
"Template:Cite book",
"Template:Citation needed",
"Template:Reflist",
"Template:Clear",
"Template:TCACycle WP78",
"Template:Cite journal",
"Template:Citric acid cycle",
"Template:Short description",
"Template:Clarify"
] | https://en.wikipedia.org/wiki/Citric_acid_cycle |
6,821 | Military engineering vehicle | A military engineering vehicle is a vehicle built for construction work or for the transportation of combat engineers on the battlefield. These vehicles may be modified civilian equipment (such as the armoured bulldozers that many nations field) or purpose-built military vehicles (such as the AVRE). The first appearance of such vehicles coincided with the appearance of the first tanks; these were modified Mark V tanks for bridging and mine clearance. Modern military engineering vehicles are expected to fulfill numerous roles and as such take many forms, including bulldozers, cranes, graders, excavators, dump trucks, breaching vehicles, bridging vehicles, military ferries, amphibious crossing vehicles, and combat engineer section carriers.
A Heavy RE tank was developed shortly after World War I by Major Giffard LeQuesne Martel RE. This vehicle was a modified Mark V tank. Two support functions for these Engineer Tanks were developed: bridging and mine clearance. The bridging component involved an assault bridge, designed by Major Charles Inglis RE, called the Canal Lock Bridge, which had sufficient length to span a canal lock. Major Martel mated the bridge with the tank and used hydraulic power generated by the tank's engine to maneuver the bridge into place. For mine clearance, the tanks were equipped with 2-ton rollers.
Between the wars, various experimental bridging tanks were developed by the Experimental Bridging Establishment (EBE) and used to test a series of methods for bridging obstacles. Captain SG Galpin RE conceived a prototype Light Tank Mk V to test the Scissors Assault Bridge. This concept was realised by Captain SA Stewart RE with significant input from Mr DM Delany, a scientific civil servant in the employ of the EBE. MB Wild & Co, Birmingham, also developed a bridge that could span gaps of 26 feet using a complex system of steel wire ropes and a travelling jib, where the front section was projected and then attached to the rear section prior to launching the bridge. This system had to be abandoned because it could not be made to work reliably; however, the idea was later used successfully on the Beaver Bridge Laying Tank.
Once World War Two had begun, the development of armoured vehicles for use by engineers in the field was accelerated under Delany's direction. The EBE rapidly developed an assault bridge, carried on a modified Covenanter tank, capable of deploying a 24-ton tracked load capacity bridge (Class 24) that could span gaps of 30 feet. However, it did not see service in the British armed forces, and all vehicles were passed on to Allied forces such as Australia and Czechoslovakia.
A Class 30 design superseded the Class 24 with no real re-design, simply the substitution of the Covenanter tank with a suitably modified Valentine. As tanks grew heavier during the war, a new bridge capable of supporting them was developed. A heavily modified Churchill used a single-piece bridge mounted on a turret-less tank and was able to lay the bridge in 90 seconds; this bridge was able to carry a 60-ton tracked or 40-ton wheeled load.
Hobart's Funnies were a number of unusually modified tanks operated during the Second World War by the 79th Armoured Division of the British Army or by specialists from the Royal Engineers. They were designed in light of problems that more standard tanks experienced during the amphibious Dieppe Raid, so that the new models would be able to overcome the problems of the planned Invasion of Normandy. These tanks played a major part on the Commonwealth beaches during the landings. They were forerunners of the modern combat engineering vehicle and were named after their commander, Major General Percy Hobart.
Hobart's unusual, specialized tanks, nicknamed "funnies", included:
In U.S. Forces, Sherman tanks were also fitted with dozer blades, and anti-mine roller devices were developed, enabling engineering operations and providing similar capabilities.
Post war, the value of combat engineering vehicles had been proven, and armoured multi-role engineering vehicles were added to the majority of armoured forces.
Military engineering can employ a wide variety of heavy equipment in the same or similar ways to how this equipment is used outside the military. Bulldozers, cranes, graders, excavators, dump trucks, loaders, and backhoes all see extensive use by military engineers.
Military engineers may also use civilian heavy equipment which was modified for military applications. Typically, this involves adding armour for protection from battlefield hazards such as artillery, unexploded ordnance, mines, and small arms fire. Often this protection is provided by armour plates and steel jackets. Some examples of armoured civilian heavy equipment are the IDF Caterpillar D9, American D7 TPK, Canadian D6 armoured bulldozer, cranes, graders, excavators, and M35 2-1/2 ton cargo truck.
Militarized heavy equipment may also take on the form of traditional civilian equipment designed and built to unique military specifications. These vehicles typically sacrifice some depth of capability from civilian models in order to gain greater speed and independence from prime movers. Examples of this type of vehicle include high-speed backhoes such as the Australian Army's High Mobility Engineering Vehicle (HMEV) from Thales or the Canadian Army's Multi-Purpose Engineer Vehicle (MPEV) from Arva.
The main article for civilian heavy equipment is: Heavy equipment (construction)
Typically based on the platform of a main battle tank, these vehicles go by different names depending upon the country of use or manufacture. In the US the term "combat engineer vehicle (CEV)" is used, in the UK the terms "Armoured Vehicle Royal Engineers (AVRE)" or "Armoured Repair and Recovery Vehicle (ARRV)" are used, while in Canada and other Commonwealth nations the term "armoured engineer vehicle (AEV)" is used. There is no set template for what such a vehicle will look like, yet likely features include a large dozer blade or mine ploughs, a large caliber demolition cannon, augers, winches, excavator arms and cranes or lifting booms.
These vehicles are designed to directly conduct obstacle breaching operations and to conduct other earth-moving and engineering work on the battlefield. Good examples of this type of vehicle include the UK Trojan AVRE, the Russian IMR, and the US M728 Combat Engineer Vehicle. Although the term "armoured engineer vehicle" is used specifically to describe these multi-purpose tank-based engineering vehicles, that term is also used more generically in British and Commonwealth militaries to describe all heavy tank-based engineering vehicles used in the support of mechanized forces. Thus, "armoured engineer vehicle" used generically would refer to AEV, AVLB, Assault Breachers, and so on.
Lighter and less multi-functional than the CEVs or AEVs described above, these vehicles are designed to conduct earth-moving work on the battlefield. These vehicles have greater high-speed mobility than traditional heavy equipment and are protected against the effects of blast and fragmentation. Good examples are the American M9 ACE and the UK FV180 Combat Engineer Tractor.
These vehicles are equipped with mechanical or other means for the breaching of man-made obstacles. Common types of breaching vehicles include mechanical flails, mine plough vehicles, and mine roller vehicles. In some cases, these vehicles will also mount mine-clearing line charges. Breaching vehicles may be either converted armoured fighting vehicles or purpose-built vehicles. In larger militaries, converted AFVs are likely to be used as assault breachers while the breached obstacle is still covered by enemy observation and fire, and then purpose-built breaching vehicles will create additional lanes for following forces.
Good examples of breaching vehicles include the US M1150 Assault Breacher Vehicle, the UK Aardvark JSFU, and the Singaporean Trailblazer.
Several types of military bridging vehicles have been developed. An armoured vehicle-launched bridge (AVLB) is typically a modified tank hull converted to carry a bridge into battle in order to support crossing ditches, small waterways, or other gap obstacles.
Another type of bridging vehicle is the truck-launched bridge. The Soviet TMM bridging truck could carry and launch a 10-meter bridge that could be daisy-chained with other TMM bridges to cross larger obstacles. More recent developments have seen launching systems that can be mounted on either a tank or a truck, deploying bridges capable of supporting heavy main battle tanks.
Earlier examples of bridging vehicles include a type in which a converted tank hull is the bridge. On these vehicles, the hull deck comprises the main portion of the treadway while ramps extend from the front and rear of the vehicle to allow other vehicles to climb over the bridging vehicle and cross obstacles. An example of this type of armoured bridging vehicle was the Churchill Ark used in the Second World War.
Another type of CEV is the armoured fighting vehicle used to transport sappers (combat engineers), which can be fitted with a bulldozer blade and other mine-breaching devices. They are often used as APCs because of their carrying ability and heavy protection. They are usually armed with machine guns and grenade launchers, and are usually tracked to provide enough tractive force to push blades and rakes. Some examples are the U.S. M113 APC, IDF Puma, Nagmachon, Husky, and U.S. M1132 ESV (a Stryker variant).
One of the major tasks of military engineering is crossing major rivers. Several military engineering vehicles have been developed in various nations to achieve this task. One of the more common types is the amphibious ferry such as the M3 Amphibious Rig. These vehicles are self-propelled on land, can transform into raft-type ferries when in the water, and often multiple vehicles can connect to form larger rafts or floating bridges. Other types of military ferries, such as the Soviet Plavayushij Transportyor - Srednyj, are able to load while still on land and transport other vehicles cross country and over water.
In addition to amphibious crossing vehicles, military engineers may also employ several types of boats. Military assault boats are small boats propelled by oars or an outboard motor and used to ferry dismounted infantry across water.
Most CEVs are armoured fighting vehicles that may be based on a tank chassis and have special attachments in order to breach obstacles. Such attachments may include dozer blades, mine rollers, cranes etc. An example of an engineering vehicle of this kind is a bridgelaying tank, which replaces the turret with a segmented hydraulic bridge. The Hobart's Funnies of the Second World War were a wide variety of armoured vehicles for combat engineering tasks. They were allocated to the initial beachhead assaults by the British and Commonwealth forces in the D-Day landings.
The British Churchill tank, because of its good cross-country performance and capacious interior with side hatches, became the tank most often adapted with modifications, the base unit being the AVRE carrying a large demolition gun. | [
{
"paragraph_id": 0,
"text": "A military engineering vehicle is a vehicle built for construction work or for the transportation of combat engineers on the battlefield. These vehicles may be modified civilian equipment (such as the armoured bulldozers that many nations field) or purpose-built military vehicles (such as the AVRE). The first appearance of such vehicles coincided with the appearance of the first tanks, these vehicles were modified Mark V tanks for bridging and mine clearance. Modern military engineering vehicles are expected to fulfill numerous roles, as such they undertake numerous forms, examples of roles include; bulldozers, cranes, graders, excavators, dump trucks, breaching vehicles, bridging vehicles, military ferries, amphibious crossing vehicles, and combat engineer section carriers.",
"title": ""
},
{
"paragraph_id": 1,
"text": "A Heavy RE tank was developed shortly after World War I by Major Giffard LeQuesne Martel RE. This vehicle was a modified Mark V tank. Two support functions for these Engineer Tanks were developed: bridging and mine clearance. The bridging component involved an assault bridge, designed by Major Charles Inglis RE, called the Canal Lock Bridge, which had sufficient length to span a canal lock. Major Martel mated the bridge with the tank and used hydraulic power generated by the tank's engine to maneuver the bridge into place. For mine clearance the tanks were equipped with 2 ton rollers.",
"title": "History"
},
{
"paragraph_id": 2,
"text": "Between the wars various experimental bridging tanks were used to test a series of methods for bridging obstacles and developed by the Experimental Bridging Establishment (EBE). Captain SG Galpin RE conceived a prototype Light Tank Mk V to test the Scissors Assault Bridge. This concept was realised by Captain SA Stewart RE with significant input from a Mr DM Delany, a scientific civil servant in the employ of the EBE. MB Wild & Co, Birmingham, also developed a bridge that could span gaps of 26 feet using a complex system of steel wire ropes and a traveling jib, where the front section was projected and then attached to the rear section prior to launching the bridge. This system had to be abandoned due to lack of success in getting it to work, however the idea was later used successfully on the Beaver Bridge Laying Tank.",
"title": "History"
},
{
"paragraph_id": 3,
"text": "Once World War Two had begun, the development of armoured vehicles for use by engineers in the field was accelerated under Delaney's direction. The EBE rapidly developed an assault bridge carried on a modified Covenanter tank capable of deploying a 24-ton tracked load capacity bridge (Class 24) that could span gaps of 30 feet. However, it did not see service in the British armed forces, and all vehicles were passed onto Allied forces such as Australia and Czechoslovakia.",
"title": "History"
},
{
"paragraph_id": 4,
"text": "A Class 30 design superseded the Class 24 with no real re-design, simply the substitution of the Covenanter tank with a suitably modified Valentine. As tanks in the war got heavier, a new bridge capable of supporting them was developed. A heavily modified Churchill used a single-piece bridge mounted on a turret-less tank and was able to lay the bridge in 90 seconds; this bridge was able to carry a 60-ton tracked or 40-ton wheeled load.",
"title": "History"
},
{
"paragraph_id": 5,
"text": "Hobart's Funnies were a number of unusually modified tanks operated during the Second World War by the 79th Armoured Division of the British Army or by specialists from the Royal Engineers. They were designed in light of problems that more standard tanks experienced during the amphibious Dieppe Raid, so that the new models would be able to overcome the problems of the planned Invasion of Normandy. These tanks played a major part on the Commonwealth beaches during the landings. They were forerunners of the modern combat engineering vehicle and were named after their commander, Major General Percy Hobart.",
"title": "History"
},
{
"paragraph_id": 6,
"text": "Hobart's unusual, specialized tanks, nicknamed \"funnies\", included:",
"title": "History"
},
{
"paragraph_id": 7,
"text": "In U.S. Forces, Sherman tanks were also fitted with dozer blades, and anti-mine roller devices were developed, enabling engineering operations and providing similar capabilities.",
"title": "History"
},
{
"paragraph_id": 8,
"text": "Post war, the value of the combat engineering vehicles had been proven, and armoured multi-role engineering vehicles have been added to the majority of armoured forces.",
"title": "History"
},
{
"paragraph_id": 9,
"text": "Military engineering can employ a wide variety of heavy equipment in the same or similar ways to how this equipment is used outside the military. Bulldozers, cranes, graders, excavators, dump trucks, loaders, and backhoes all see extensive use by military engineers.",
"title": "Types"
},
{
"paragraph_id": 10,
"text": "Military engineers may also use civilian heavy equipment which was modified for military applications. Typically, this involves adding armour for protection from battlefield hazards such as artillery, unexploded ordnance, mines, and small arms fire. Often this protection is provided by armour plates and steel jackets. Some examples of armoured civilian heavy equipment are the IDF Caterpillar D9, American D7 TPK, Canadian D6 armoured bulldozer, cranes, graders, excavators, and M35 2-1/2 ton cargo truck.",
"title": "Types"
},
{
"paragraph_id": 11,
"text": "Militarized heavy equipment may also take on the form of traditional civilian equipment designed and built to unique military specifications. These vehicles typically sacrifice some depth of capability from civilian models in order to gain greater speed and independence from prime movers. Examples of this type of vehicle include high speed backhoes such as the Australian Army's High Mobility Engineering Vehicle (HMEV) from Thales or the Canadian Army's Multi-Purpose Engineer Vehicle (MPEV) from Arva.",
"title": "Types"
},
{
"paragraph_id": 12,
"text": "The main article for civilian heavy equipment is: Heavy equipment (construction)",
"title": "Types"
},
{
"paragraph_id": 13,
"text": "Typically based on the platform of a main battle tank, these vehicles go by different names depending upon the country of use or manufacture. In the US the term \"combat engineer vehicle (CEV)\" is used, in the UK the terms \"Armoured Vehicle Royal Engineers (AVRE)\" or Armoured Repair and Recovery Vehicle (ARRV) are used, while in Canada and other commonwealth nations the term \"armoured engineer vehicle (AEV)\" is used. There is no set template for what such a vehicle will look like, yet likely features include a large dozer blade or mine ploughs, a large caliber demolition cannon, augers, winches, excavator arms and cranes or lifting booms.",
"title": "Types"
},
{
"paragraph_id": 14,
"text": "These vehicles are designed to directly conduct obstacle breaching operations and to conduct other earth-moving and engineering work on the battlefield. Good examples of this type of vehicle include the UK Trojan AVRE, the Russian IMR, and the US M728 Combat Engineer Vehicle. Although the term \"armoured engineer vehicle\" is used specifically to describe these multi-purpose tank based engineering vehicles, that term is also used more generically in British and Commonwealth militaries to describe all heavy tank based engineering vehicles used in the support of mechanized forces. Thus, \"armoured engineer vehicle\" used generically would refer to AEV, AVLB, Assault Breachers, and so on.",
"title": "Types"
},
{
"paragraph_id": 15,
"text": "Lighter and less multi-functional than the CEVs or AEVs described above, these vehicles are designed to conduct earth-moving work on the battlefield. These vehicles have greater high speed mobility than traditional heavy equipment and are protected against the effects of blast and fragmentation. Good examples are the American M9 ACE and the UK FV180 Combat Engineer Tractor.",
"title": "Types"
},
{
"paragraph_id": 16,
"text": "These vehicles are equipped with mechanical or other means for the breaching of man made obstacles. Common types of breaching vehicles include mechanical flails, mine plough vehicles, and mine roller vehicles. In some cases, these vehicles will also mount Mine-clearing line charges. Breaching vehicles may be either converted armoured fighting vehicles or purpose built vehicles. In larger militaries, converted AFV are likely to be used as assault breachers while the breached obstacle is still covered by enemy observation and fire, and then purpose built breaching vehicles will create additional lanes for following forces.",
"title": "Types"
},
{
"paragraph_id": 17,
"text": "Good examples of breaching vehicles include the US M1150 Assault Breacher Vehicle, the UK Aardvark JSFU, and the Singaporean Trailblazer.",
"title": "Types"
},
{
"paragraph_id": 18,
"text": "Several types of military bridging vehicles have been developed. An armoured vehicle-launched bridge (AVLB) is typically a modified tank hull converted to carry a bridge into battle in order to support crossing ditches, small waterways, or other gap obstacles.",
"title": "Types"
},
{
"paragraph_id": 19,
"text": "Another type of bridging vehicle is the truck launched bridge. The Soviet TMM bridging truck could carry and launch a 10-meter bridge that could be daisy-chained with other TMM bridges to cross larger obstacles. More recent developments have seen the conversion of AVLB and truck launched bridge with launching systems that can be mounted on either tank or truck for bridges that are capable of supporting heavy main battle tanks.",
"title": "Types"
},
{
"paragraph_id": 20,
"text": "Earlier examples of bridging vehicles include a type in which a converted tank hull is the bridge. On these vehicles, the hull deck comprises the main portion of the tread way while ramps extend from the front and rear of the vehicle to allow other vehicles to climb over the bridging vehicle and cross obstacles. An example of this type of armoured bridging vehicle was the Churchill Ark used in the Second World War.",
"title": "Types"
},
{
"paragraph_id": 21,
"text": "Another type of CEVs are armoured fighting vehicles which are used to transport sappers (combat engineers) and can be fitted with a bulldozer's blade and other mine-breaching devices. They are often used as APCs because of their carrying ability and heavy protection. They are usually armed with machine guns and grenade launchers and usually tracked to provide enough tractive force to push blades and rakes. Some examples are the U.S. M113 APC, IDF Puma, Nagmachon, Husky, and U.S. M1132 ESV (a Stryker variant).",
"title": "Types"
},
{
"paragraph_id": 22,
"text": "One of the major tasks of military engineering is crossing major rivers. Several military engineering vehicles have been developed in various nations to achieve this task. One of the more common types is the amphibious ferry such as the M3 Amphibious Rig. These vehicles are self-propelled on land, they can transform into raft type ferries when in the water, and often multiple vehicles can connect to form larger rafts or floating bridges. Other types of military ferries, such as the Soviet Plavayushij Transportyor - Srednyj, are able to load while still on land and transport other vehicles cross country and over water.",
"title": "Types"
},
{
"paragraph_id": 23,
"text": "In addition to amphibious crossing vehicles, military engineers may also employ several types of boats. Military assault boats are small boats propelled by oars or an outboard motor and used to ferry dismounted infantry across water.",
"title": "Types"
},
{
"paragraph_id": 24,
"text": "Most CEVs are armoured fighting vehicles that may be based on a tank chassis and have special attachments in order to breach obstacles. Such attachments may include dozer blades, mine rollers, cranes etc. An example of an engineering vehicle of this kind is a bridgelaying tank, which replaces the turret with a segmented hydraulic bridge. The Hobart's Funnies of the Second World War were a wide variety of armoured vehicles for combat engineering tasks. They were allocated to the initial beachhead assaults by the British and Commonwealth forces in the D-Day landings.",
"title": "Tank-based combat engineering vehicles"
},
{
"paragraph_id": 25,
"text": "The British Churchill tank because of its good cross-country performance and capacious interior with side hatches became the most adapted with modifications, the base unit being the AVRE carrying a large demolition gun.",
"title": "Tank-based combat engineering vehicles"
}
] | A military engineering vehicle is a vehicle built for construction work or for the transportation of combat engineers on the battlefield. These vehicles may be modified civilian equipment or purpose-built military vehicles. The first appearance of such vehicles coincided with the appearance of the first tanks; these were modified Mark V tanks used for bridging and mine clearance. Modern military engineering vehicles are expected to fulfill numerous roles and accordingly take many forms; examples include bulldozers, cranes, graders, excavators, dump trucks, breaching vehicles, bridging vehicles, military ferries, amphibious crossing vehicles, and combat engineer section carriers. | 2023-07-26T00:00:05Z | [
"Template:Registration required",
"Template:Commons category-inline",
"Template:Short description",
"Template:Reflist",
"Template:Cite web",
"Template:Unreliable source?",
"Template:Authority control",
"Template:Main",
"Template:Convert",
"Template:Webarchive"
] | https://en.wikipedia.org/wiki/Military_engineering_vehicle |
|
6,822 | Catalonia | Catalonia (/ˌkætəˈloʊniə/; Catalan: Catalunya [kətəˈluɲə]; Spanish: Cataluña [kataˈluɲa]; Occitan: Catalonha [kataˈluɲa]) is an autonomous community of Spain, designated as a nationality by its Statute of Autonomy.
Most of its territory (except the Val d'Aran) lies on the northeast of the Iberian Peninsula, to the south of the Pyrenees mountain range. Catalonia is administratively divided into four provinces: Barcelona, Girona, Lleida, and Tarragona. The capital and largest city, Barcelona, is the second-most populated municipality in Spain and the fifth-most populous urban area in the European Union. Modern-day Catalonia comprises most of the medieval and early modern Principality of Catalonia (with the remainder Roussillon now part of France's Pyrénées-Orientales). It is bordered by France (Occitanie) and Andorra to the north, the Mediterranean Sea to the east, and the Spanish autonomous communities of Aragon to the west and Valencia to the south. The official languages are Catalan, Spanish, and the Aranese dialect of Occitan.
In the late 8th century, various counties across the eastern Pyrenees were established by the Frankish kingdom as a defensive barrier against Muslim invasions. In the 10th century, the County of Barcelona became progressively independent. In 1137, Barcelona and the Kingdom of Aragon were united by marriage under the Crown of Aragon. Within the Crown, the Catalan counties adopted a common polity, the Principality of Catalonia, developing its institutional system, such as Courts, Generalitat and constitutions, becoming the base for the Crown's Mediterranean trade and expansionism. In the later Middle Ages, Catalan literature flourished. In 1469, the king of Aragon and the queen of Castile were married and ruled their realms together, retaining all of their distinct institutions and legislation.
During the Franco-Spanish War (1635–1659), Catalonia revolted (1640–1652) against a large and burdensome presence of the royal army, being briefly proclaimed a republic under French protection until it was largely reconquered by the Spanish army. By the Treaty of the Pyrenees (1659), the northern parts of Catalonia, mostly the Roussillon, were ceded to France. During the War of the Spanish Succession (1701–1714), the Crown of Aragon sided against the Bourbon Philip V of Spain, but the Catalans were defeated with the fall of Barcelona on 11 September 1714. Philip V subsequently imposed a unifying administration across Spain, enacting the Nueva Planta decrees which, like in the other realms of the Crown of Aragon, suppressed Catalan institutions and rights. As a consequence, Catalan as a language of government and literature was eclipsed by Spanish. Throughout the 18th century, Catalonia experienced economic growth.
In the 19th century, Catalonia was severely affected by the Napoleonic and Carlist Wars. In the second third of the century, it experienced industrialisation. As wealth from the industrial expansion grew, it saw a cultural renaissance coupled with incipient nationalism while several workers' movements appeared. With the establishment of the Second Spanish Republic (1931–1939), the Generalitat was restored as a Catalan autonomous government. After the Spanish Civil War, the Francoist dictatorship enacted repressive measures, abolishing Catalan self-government and banning the official use of the Catalan language. After a period of autarky, from the late 1950s through to the 1970s Catalonia saw rapid economic growth, drawing many workers from across Spain, making Barcelona one of Europe's largest industrial metropolitan areas and turning Catalonia into a major tourist destination. During the Spanish transition to democracy (1975–1982), Catalonia regained self-government and is now one of the most economically dynamic communities in Spain.
Since the 2010s, there has been growing support for Catalan independence. On 27 October 2017, the Catalan Parliament unilaterally declared independence following a referendum that was deemed unconstitutional by the Spanish state. The Spanish Senate voted in favour of enforcing direct rule by removing the Catalan government and calling a snap regional election. The Spanish Supreme Court imprisoned seven former ministers of the Catalan government on charges of rebellion and misuse of public funds, while several others—including then-President Carles Puigdemont—fled to other European countries. Those in prison were pardoned by the Spanish government in 2021.
The name "Catalonia" (Medieval Latin: Cathalaunia; Catalan: Catalunya), spelled Cathalonia, began to be used for the homeland of the Catalans (Cathalanenses) in the late 11th century and was probably used before as a territorial reference to the group of counties that comprised part of the March of Gothia and the March of Hispania under the control of the Count of Barcelona and his relatives. The origin of the name Catalunya is subject to diverse interpretations because of a lack of evidence.
One theory suggests that Catalunya derives from the name Gothia (or Gauthia) Launia ("Land of the Goths"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, known as Gothia, whence Gothland > Gothlandia > Gothalania > Cathalaunia > Catalonia theoretically derived. During the Middle Ages, Byzantine chroniclers claimed that Catalania derives from the local medley of Goths with Alans, initially constituting a Goth-Alania.
Other theories suggest:
In English, Catalonia is pronounced /kætəˈloʊniə/. The native name, Catalunya, is pronounced [kətəˈluɲə] in Central Catalan, the most widely spoken variety, and [kataˈluɲa] in North-Western Catalan. The Spanish name is Cataluña ([kataˈluɲa]), and the Aranese name is Catalonha ([kataˈluɲa]).
The first known human settlements in what is now Catalonia were at the beginning of the Middle Paleolithic. The oldest known trace of human occupation is a mandible found in Banyoles, described by some sources as pre-Neanderthal, that is, some 200,000 years old; other sources suggest it to be only about one third that old. From the next prehistoric era, the Epipalaeolithic or Mesolithic, important remains survive, the greater part dated between 8000 and 5000 BC, such as those of Sant Gregori (Falset) and el Filador (Margalef de Montsant). The most important sites from these eras, all excavated in the region of Moianès, are the Balma del Gai (Epipaleolithic) and the Balma de l'Espluga (late Epipaleolithic and Early Neolithic).
The Neolithic era began in Catalonia around 5000 BC, although the population was slower to develop fixed settlements than in other places, thanks to the abundance of woods, which allowed the continuation of a fundamentally hunter-gatherer culture. An example of such settlements would be La Draga at Banyoles, an "early Neolithic village which dates from the end of the 6th millennium BC."
The Chalcolithic period developed in Catalonia between 2500 and 1800 BC, with the beginning of the construction of copper objects. The Bronze Age occurred between 1800 and 700 BC. There are few remnants of this era, but there were some known settlements in the low Segre zone. The Bronze Age coincided with the arrival of the Indo-Europeans through the Urnfield Culture, whose successive waves of migration began around 1200 BC, and they were responsible for the creation of the first proto-urban settlements. Around the middle of the 7th century BC, the Iron Age arrived in Catalonia.
In pre-Roman times, the area that is now called Catalonia in the north-east of Iberian Peninsula – like the rest of the Mediterranean side of the peninsula – was populated by the Iberians. The Iberians of this area – the Ilergetes, Indigetes and Lacetani (Cerretains) – also maintained relations with the peoples of the Mediterranean. Some urban agglomerations became relevant, including Ilerda (Lleida) inland, Hibera (perhaps Amposta or Tortosa) or Indika (Ullastret). Coastal trading colonies were established by the ancient Greeks, who settled around the Gulf of Roses, in Emporion (Empúries) and Roses in the 8th century BC. The Carthaginians briefly ruled the territory in the course of the Second Punic War and traded with the surrounding Iberian population.
After the Carthaginian defeat by the Roman Republic, the north-east of Iberia became the first to come under Roman rule and became part of Hispania, the westernmost part of the Roman Empire. Tarraco (modern Tarragona) was one of the most important Roman cities in Hispania and the capital of the province of Tarraconensis. Other important cities of the Roman period are Ilerda (Lleida), Dertosa (Tortosa), Gerunda (Girona) as well as the ports of Empuriæ (former Emporion) and Barcino (Barcelona). As for the rest of Hispania, Latin law was granted to all cities under the reign of Vespasian (69–79 AD), while Roman citizenship was granted to all free men of the empire by the Edict of Caracalla in 212 AD (Tarraco, the capital, was already a colony of Roman law since 45 BC). It was a rich agricultural province (olive oil, wine, wheat), and the first centuries of the Empire saw the construction of roads (the most important being the Via Augusta, parallel to Mediterranean coastline) and infrastructure like aqueducts.
Conversion to Christianity, attested in the 3rd century, was completed in urban areas in the 4th century. Although Hispania remained under Roman rule and did not fall under the rule of Vandals, Suebi and Alans in the 5th century, the main cities suffered frequent sacking and some deurbanization.
After the fall of the Western Roman Empire, the area was conquered by the Visigoths and was ruled as part of the Visigothic Kingdom for almost two and a half centuries. In 718, it came under Muslim control and became part of Al-Andalus, a province of the Umayyad Caliphate. From the conquest of Roussillon in 760 to the conquest of Barcelona in 801, the Frankish empire took control of the area between Septimania and the Llobregat river from the Muslims and created heavily militarised, self-governing counties. These counties formed part of what is historiographically known as the Gothic and Hispanic Marches, a buffer zone in the south of the Frankish empire in the former province of Septimania and in the northeast of the Iberian Peninsula, to act as a defensive barrier for the Frankish Empire against further Muslim invasions from Al-Andalus.
These counties came under the rule of the counts of Barcelona, who were Frankish vassals nominated by the emperor of the Franks, to whom they were feudatories (801–988). The earliest known use of the name "Catalonia" for these counties dates to 1117. At the end of the 9th century, the Count of Barcelona Wilfred the Hairy (878–897) made his titles hereditary and thus founded the dynasty of the House of Barcelona, which ruled Catalonia until 1410.
In 988 Borrell II, Count of Barcelona, did not recognise the new French king Hugh Capet as his king, evidencing the loss of dependency from Frankish rule and confirming his successors (from Ramon Borrell I onwards) as independent of the Capetian crown, whom they regarded as usurpers of the Carolingian Frankish realm. At the beginning of the eleventh century the Catalan counties underwent a significant process of feudalisation, partially checked by the efforts of the Church-sponsored Peace and Truce Assemblies and the negotiation skills of the Count of Barcelona Ramon Berenguer I (1035–1076), which began the codification of feudal law in the written Usages of Barcelona, becoming the basis of Catalan law. In 1137, Ramon Berenguer IV, Count of Barcelona, decided to accept King Ramiro II of Aragon's proposal to marry Queen Petronila, establishing the dynastic union of the County of Barcelona with the Kingdom of Aragon, creating the composite monarchy known as the Crown of Aragon and making the Catalan counties that were united under the County of Barcelona into a principality of the Aragonese Crown.
In 1258, by means of the Treaty of Corbeil, James I, King of Aragon and Count of Barcelona, King of Mallorca and of Valencia, renounced his family rights and dominions in Occitania and recognised the king of France as heir of the Carolingian dynasty. The king of France, Louis IX, formally relinquished his claims of feudal lordship over all the Catalan counties, except the County of Foix, despite the opposition of King James. This treaty confirmed, from the French point of view, the independence of the Catalan counties established and exercised during the previous three centuries, but also meant the irremediable separation between the geographical areas of Catalonia and Languedoc.
As a coastal territory, Catalonia became the base of the Aragonese Crown's maritime forces, which spread the power of the Crown in the Mediterranean, turning Barcelona into a powerful and wealthy city. In the period of 1164–1410, new territories, the Kingdom of Valencia, the Kingdom of Majorca, the Kingdom of Sardinia, the Kingdom of Sicily, and, briefly, the Duchies of Athens and Neopatras, were incorporated into the dynastic domains of the House of Aragon. The expansion was accompanied by a great development of the Catalan trade, creating an extensive trade network across the Mediterranean which competed with those of the maritime republics of Genoa and Venice.
At the same time, the Principality of Catalonia developed a complex institutional and political system based on the concept of a pact between the estates of the realm and the king. Laws had to be approved in the Catalan Courts (Corts Catalanes), one of the first parliamentary bodies of Europe that, since 1283, obtained the power to create legislation with the monarch. The Courts were composed of the three Estates organized into "arms" (braços), were presided over by the monarch, and approved the Catalan constitutions, which established a compilation of rights for the inhabitants of the Principality. In order to collect general taxes, the Courts of 1359 established a permanent body of deputies, called the Deputation of the General (and later usually known as the Generalitat), which gained considerable political power over the next centuries.
The domains of the Aragonese Crown were severely affected by the Black Death pandemic and by later outbreaks of the plague. Between 1347 and 1497 Catalonia lost 37 percent of its population. In 1410, the last reigning monarch of the House of Barcelona, King Martin I, died without surviving descendants. Under the Compromise of Caspe (1412), Ferdinand from the Castilian House of Trastámara received the Crown of Aragon as Ferdinand I of Aragon. During the reign of his son, John II, social and political tensions caused the Catalan Civil War (1462–1472) and the War of the Remences (1462–1486). The Sentencia Arbitral de Guadalupe (1486) liberated the remença peasants from the feudal 'evil customs'.
In the later Middle Ages, Catalan literature flourished in Catalonia proper and in the kingdoms of Majorca and Valencia, with such remarkable authors as the philosopher Ramon Llull, the Valencian poet Ausiàs March, and Joanot Martorell, author of the novel Tirant lo Blanch, published in 1490.
Ferdinand II of Aragon, the grandson of Ferdinand I, and Queen Isabella I of Castile were married in 1469, later taking the title the Catholic Monarchs; subsequently, this event was seen by historiographers as the dawn of a unified Spain. At this time, though united by marriage, the Crowns of Castile and Aragon maintained distinct territories, each keeping its own traditional institutions, parliaments, laws and currency. Castile commissioned expeditions to the Americas and benefited from the riches acquired in the Spanish colonisation of the Americas, but, in time, also carried the main burden of military expenses of the united Spanish kingdoms. After Isabella's death, Ferdinand II personally ruled both crowns.
By virtue of descent from his maternal grandparents, Ferdinand II of Aragon and Isabella I of Castile, in 1516 Charles I of Spain became the first king to rule the Crowns of Castile and Aragon simultaneously by his own right. Following the death of his paternal (House of Habsburg) grandfather, Maximilian I, Holy Roman Emperor, he was also elected Charles V, Holy Roman Emperor, in 1519.
Over the next few centuries, the Principality of Catalonia was generally on the losing side of a series of wars that led steadily to an increased centralization of power in Spain. Despite this fact, between the 16th and 18th centuries, the participation of the political community in the local and the general Catalan government grew (thus consolidating its constitutional system), while the kings remained absent, represented by a viceroy. Tensions between Catalan institutions and the monarchy began to arise. The large and burdensome presence of the Spanish royal army in the Principality due to the Franco-Spanish War led to an uprising of peasants, provoking the Reapers' War (1640–1652), which saw Catalonia rebel (briefly as a republic led by the chairman of the Generalitat, Pau Claris) with French help against the Spanish Crown for overstepping Catalonia's rights during the Thirty Years' War. Within a brief period France took full control of Catalonia. Most of Catalonia was reconquered by the Spanish monarchy but Catalan rights were recognised. Roussillon and half of Cerdanya were lost to France by the Treaty of the Pyrenees (1659).
The most significant conflict concerning the governing monarchy was the War of the Spanish Succession (1701–1715), which began when the childless Charles II of Spain, the last Spanish Habsburg, died without an heir in 1700. Charles II had chosen Philip V of Spain from the French House of Bourbon. Catalonia, like other territories that formed the Crown of Aragon, rose up in support of the Austrian Habsburg pretender Charles VI, Holy Roman Emperor, in his claim for the Spanish throne as Charles III of Spain. The fight between the houses of Bourbon and Habsburg for the Spanish Crown split Spain and Europe.
The fall of Barcelona on 11 September 1714 to the Bourbon king Philip V militarily ended the Habsburg claim to the Spanish Crown, which became legal fact in the Treaty of Utrecht. Philip felt that he had been betrayed by the Catalan Courts, as they had initially sworn loyalty to him when he had presided over them in 1701. In retaliation for the betrayal, and inspired by the French absolutist style of government, the first Bourbon king introduced the Nueva Planta decrees, which incorporated the realms of the Crown of Aragon, including the Principality of Catalonia, as provinces of the Crown of Castile in 1716, terminating their separate institutions, laws and rights, as well as their pactist politics, within a united kingdom of Spain. From the second third of the 18th century onwards Catalonia carried out a successful process of proto-industrialization, reinforced in the last quarter of the century when Castile's trade monopoly with the American colonies ended.
After the War of the Spanish Succession, the assimilation of the Crown of Aragon by the Castilian Crown through the Nueva Planta Decrees was the first step in the creation of the Spanish nation state. As with other European nation-states in formation, this was done not on a uniform ethnic basis, but by imposing the political and cultural characteristics of the capital, in this case Madrid and Central Spain, on those of the other areas, whose inhabitants would become national minorities to be assimilated through nationalist policies. These nationalist policies, sometimes very aggressive and still in force, have been and remain the seed of repeated territorial conflicts within the state.
At the beginning of the nineteenth century, Catalonia was severely affected by the Napoleonic Wars. In 1808, it was occupied by French troops; the resistance against the occupation eventually developed into the Peninsular War. The rejection of French dominion was institutionalized with the creation of "juntas" (councils) who, remaining loyal to the Bourbons, exercised the sovereignty and representation of the territory due to the disappearance of the old institutions. Napoleon took direct control of Catalonia to reestablish order, creating the Government of Catalonia under the rule of Marshal Augereau, and making Catalan briefly an official language again. Between 1812 and 1814, Catalonia was annexed to France and organized as four departments. The French troops evacuated Catalan territory at the end of 1814. After the Bourbon restoration in Spain and the death of the absolutist king Ferdinand VII (1833), Carlist Wars erupted against the newly established liberal state of Isabella II. Catalonia was divided, with the coastal and most industrialized areas supporting liberalism, while many inland areas were in the hands of the Carlist faction; the latter proposed to reestablish the institutional systems suppressed by the Nueva Planta decrees in the ancient realms of the Crown of Aragon. The consolidation of the liberal state saw a new territorial division of Spain into provinces, including Catalonia, which was divided into four (Barcelona, Girona, Lleida and Tarragona).
In the second third of the 19th century, Catalonia became an important industrial center, particularly focused on textiles. This process was a consequence of the conditions of proto-industrialisation of textile production in the prior two centuries, growing capital from wine and brandy exports, and was boosted by government support for domestic manufacturing. In 1832, the Bonaplata Factory in Barcelona became the first factory in the country to make use of the steam engine. The first railway on the Iberian Peninsula was built between Barcelona and Mataró in 1848. A policy to encourage company towns also saw the textile industry flourish in the countryside in the 1860s and 1870s. Although the policy of Spanish governments oscillated between free trade and protectionism, protectionist laws became more common. To this day Catalonia remains one of the most industrialised areas of Spain.
In the same period, Barcelona was the focus of industrial conflict and revolutionary uprisings known as "bullangues". In Catalonia, a republican current began to develop and, inevitably, many Catalans favored the federalisation of Spain. Meanwhile, the Catalan language saw a cultural renaissance from the second third of the century onwards, the Renaixença, among both the working class and the bourgeoisie. Right after the fall of the First Spanish Republic (1873–1874) and the subsequent restoration of the Bourbon dynasty (1874), Catalan nationalism began to be organized politically under the leadership of the republican federalist Valentí Almirall.
The anarchist movement had been active throughout the last quarter of the 19th century and the early 20th century, founding the CNT trade union in 1910 and achieving one of the first eight-hour workdays in Europe in 1919. Growing resentment of conscription and of the military culminated in the Tragic Week in Barcelona in 1909. Under the hegemony of the Regionalist League, Catalonia gained a degree of administrative unity for the first time in the Modern era. In 1914, the four Catalan provinces were authorized to create a commonwealth (Catalan: Mancomunitat de Catalunya), without any legislative power or specific political autonomy, which carried out an ambitious program of modernization, but it was disbanded in 1925 by the dictatorship of Primo de Rivera (1923–1930). During the final stage of the Dictatorship, with Spain beginning to suffer an economic crisis, Barcelona hosted the 1929 International Exposition.
After the fall of the dictatorship and a brief proclamation of the Catalan Republic during the events of the proclamation of the Second Spanish Republic (14–17 April 1931), Catalonia received, in 1932, its first Statute of Autonomy from the Spanish Republic's Parliament, granting it a considerable degree of self-government and establishing an autonomous body, the Generalitat of Catalonia, which included a parliament, an executive council and a court of appeal. The left-wing independentist leader Francesc Macià was appointed its first president. Under the Statute, Catalan became an official language. The governments of the Republican Generalitat, led by the Republican Left of Catalonia (ERC) members Francesc Macià (1931–1933) and Lluís Companys (1933–1940), sought to implement an advanced and progressive social agenda, despite the internal difficulties. This period was marked by political unrest, the effects of the economic crisis and their social repercussions. The Statute of Autonomy was suspended in 1934, due to the Events of 6 October in Barcelona, as a response to the accession of the right-wing Spanish nationalist party CEDA to the government of the Republic, considered close to fascism. After the electoral victory of the left-wing Popular Front in February 1936, the Government of Catalonia was pardoned and self-government was restored.
The defeat of the military rebellion against the Republican government in Barcelona placed Catalonia firmly on the Republican side of the Spanish Civil War. During the war, there were two rival powers in Catalonia: the de jure power of the Generalitat and the de facto power of the armed popular militias. Violent confrontations between the workers' parties (CNT-FAI and POUM against the PSUC) culminated in the defeat of the former in 1937. The situation resolved itself progressively in favor of the Generalitat, but at the same time the Generalitat was partially losing its autonomous power within Republican Spain. In 1938 Franco's troops broke the Republican territory in two, isolating Catalonia from the rest of the Republic. The defeat of the Republican army in the Battle of the Ebro led in 1938 and 1939 to the occupation of Catalonia by Franco's forces.
The defeat of the Spanish Republic in the Spanish Civil War brought to power the dictatorship of Francisco Franco, whose first ten-year rule was particularly violent, autocratic, and repressive in a political, cultural, social, and economic sense. In Catalonia, any kind of public activity associated with Catalan nationalism, republicanism, anarchism, socialism, liberalism, democracy or communism, including the publication of books on those subjects or simply discussion of them in open meetings, was banned.
Franco's regime banned the use of Catalan in government-run institutions and during public events, and the Catalan institutions of self-government were abolished. The pro-Republic president of Catalonia, Lluís Companys, was taken to Spain from his exile in German-occupied France and was tortured and executed in the Montjuïc Castle of Barcelona for the crime of 'military rebellion'.
During later stages of Francoist Spain, certain folkloric and religious celebrations in Catalan resumed and were tolerated. Use of Catalan in the mass media had been forbidden but was permitted from the early 1950s in the theatre. Despite the ban during the first years and the difficulties of the next period, publishing in Catalan continued throughout his rule.
The years after the war were extremely hard. Catalonia, like many other parts of Spain, had been devastated by the war. Recovery from the war damage was slow and made more difficult by the international trade embargo and the autarkic politics of Franco's regime. By the late 1950s, the region had recovered its pre-war economic levels and in the 1960s was the second-fastest growing economy in the world in what became known as the Spanish miracle. During this period there was a spectacular growth of industry and tourism in Catalonia that drew large numbers of workers to the region from across Spain and made the area around Barcelona one of Europe's largest industrial metropolitan areas.
After Franco's death in 1975, Catalonia voted for the adoption of a democratic Spanish Constitution in 1978, in which Catalonia recovered political and cultural autonomy, restoring the Generalitat (exiled since the end of the Civil War in 1939) in 1977 and adopting a new Statute of Autonomy in 1979, which defined Catalonia as a "nationality". The first elections to the Parliament of Catalonia under this Statute gave the Catalan presidency to Jordi Pujol, leader of Convergència i Unió (CiU), a center-right Catalan nationalist electoral coalition, with Pujol re-elected until 2003. Throughout the 1980s and 1990s, the institutions of Catalan autonomy were deployed, among them an autonomous police force, the Mossos d'Esquadra, in 1983, and the broadcasting network Televisió de Catalunya and its first channel TV3, created in 1983. An extensive program of normalization of Catalan language was carried out. Today, Catalonia remains one of the most economically dynamic communities of Spain. The Catalan capital and largest city, Barcelona, is a major international cultural centre and a major tourist destination. In 1992, Barcelona hosted the Summer Olympic Games.
In November 2003, elections to the Parliament of Catalonia gave the government to a left-wing Catalanist coalition formed by the Socialists' Party of Catalonia (PSC-PSOE), Republican Left of Catalonia (ERC) and Initiative for Catalonia Greens (ICV), and the socialist Pasqual Maragall was appointed president. The new government drafted a new version of the Statute of Autonomy, with the aim of consolidating and expanding certain aspects of self-government.
The new Statute of Autonomy of Catalonia, approved after a referendum in 2006, was contested by important sectors of Spanish society, especially by the conservative People's Party, which sent the law to the Constitutional Court of Spain. In 2010, the Court declared invalid some of the articles that established an autonomous Catalan system of justice, improved aspects of financing, a new territorial division, the status of the Catalan language and the symbolic declaration of Catalonia as a nation. This decision was severely contested by large sectors of Catalan society, which increased demands for independence.
A controversial independence referendum was held in Catalonia on 1 October 2017, using a disputed voting process. It was declared illegal and suspended by the Constitutional Court of Spain, because it breached the 1978 Constitution. Subsequent developments saw, on 27 October 2017, a symbolic declaration of independence by the Parliament of Catalonia, the enforcement of direct rule by the Spanish government through the use of Article 155 of the Constitution, the dismissal of the Executive Council and the dissolution of the Parliament, with a snap regional election called for 21 December 2017, which ended with a victory of pro-independence parties. Former President Carles Puigdemont and five former cabinet ministers fled Spain and took refuge in other European countries (such as Belgium, in Puigdemont's case), whereas nine other cabinet members, including vice-president Oriol Junqueras, were sentenced to prison under various charges of rebellion, sedition, and misuse of public funds. Quim Torra became the 131st President of the Government of Catalonia on 17 May 2018, after the Spanish courts blocked three other candidates.
In 2018, the Assemblea Nacional Catalana joined the Unrepresented Nations and Peoples Organization (UNPO) on behalf of Catalonia.
On 14 October 2019, the Spanish Supreme Court convicted several Catalan political leaders involved in organizing the referendum on Catalonia's independence from Spain on charges ranging from sedition to misuse of public funds, with sentences ranging from 9 to 13 years in prison. This decision sparked demonstrations around Catalonia. Those convicted were later pardoned by the Spanish government and left prison in June 2021.
The climate of Catalonia is diverse. The populated areas lying by the coast in Tarragona, Barcelona and Girona provinces feature a hot-summer Mediterranean climate (Köppen Csa). The inland part (including the Lleida province and the inner part of Barcelona province) shows a mostly Mediterranean climate (Köppen Csa). The Pyrenean peaks have a continental (Köppen D) or even Alpine climate (Köppen ET) at the highest summits, while the valleys have a maritime or oceanic climate sub-type (Köppen Cfb).
In the Mediterranean area, summers are dry and hot with sea breezes, and the maximum temperature is around 26–31 °C (79–88 °F). Winter is cool or slightly cold depending on the location. It snows frequently in the Pyrenees, and it occasionally snows at lower altitudes, even by the coastline. Spring and autumn are typically the rainiest seasons, except for the Pyrenean valleys, where summer is typically stormy.
The inland part of Catalonia is hotter and drier in summer. Temperatures may reach 35 °C (95 °F), on some days even 40 °C (104 °F). Nights are cooler there than at the coast, with temperatures of around 14–17 °C (57–63 °F). Fog is not uncommon in valleys and plains; it can be especially persistent, with freezing drizzle episodes and subzero temperatures during winter, mainly along the Ebro and Segre valleys and in the Plain of Vic.
Catalonia has a marked geographical diversity, considering the relatively small size of its territory. The geography is conditioned by the Mediterranean coast, with 580 kilometres (360 miles) of coastline, and the towering Pyrenees along the long northern border. Catalonia is divided into three main geomorphological units:
The Catalan Pyrenees represent almost half the length of the Pyrenees, extending more than 200 kilometres (120 miles). Traditionally, a distinction is made between the Axial Pyrenees (the main part) and the Pre-Pyrenees (south of the Axial), mountainous formations parallel to the main ranges but of lower altitude, less steep and of a different geological formation. The highest mountain of Catalonia, located north of the comarca of Pallars Sobirà, is the Pica d'Estats (3,143 m), followed by the Puigpedrós (2,914 m). The Serra del Cadí comprises the highest peaks in the Pre-Pyrenees and forms the southern boundary of the Cerdanya valley.
The Central Catalan Depression is a plain located between the Pyrenees and Pre-Coastal Mountains. Elevation ranges from 200 to 600 metres (660 to 1,970 feet). The plains and the water that descend from the Pyrenees have made it fertile territory for agriculture and numerous irrigation canals have been built. Another major plain is the Empordà, located in the northeast.
The Catalan Mediterranean system is based on two ranges running roughly parallel to the coast (southwest–northeast), called the Coastal and the Pre-Coastal Ranges. The Coastal Range is both the shorter and the lower of the two, while the Pre-Coastal is greater in both length and elevation. Areas within the Pre-Coastal Range include Montserrat, Montseny and the Ports de Tortosa-Beseit. Lowlands alternate with the Coastal and Pre-Coastal Ranges. The Coastal Lowland is located to the east of the Coastal Range, between it and the coast, while the Pre-Coastal Lowlands are located inland, between the Coastal and Pre-Coastal Ranges, and include the Vallès and Penedès plains.
Catalonia is a showcase of European landscapes on a small scale: just over 30,000 square kilometres (12,000 square miles) host a variety of substrates, soils, climates, orientations, altitudes and distances to the sea. The area is of great ecological diversity and has a remarkable wealth of landscapes, habitats and species.
The fauna of Catalonia comprises a minority of animals endemic to the region and a majority of non-endemic animals. Much of Catalonia (except the mountain areas) enjoys a Mediterranean climate, so many of the animals that live there are adapted to Mediterranean ecosystems. Among mammals, wild boar, red foxes and roe deer are plentiful, as is the Pyrenean chamois in the Pyrenees. Other large species, such as the bear, have recently been reintroduced.
The waters of the Balearic Sea are rich in biodiversity, including oceanic megafauna: various types of whales (such as fin, sperm and pilot whales) and dolphins can be found in the area.
Most of Catalonia belongs to the Mediterranean Basin. The Catalan hydrographic network consists of two important basins, that of the Ebro and that comprising the internal basins of Catalonia (covering 46.84% and 51.43% of the territory respectively), both of which drain to the Mediterranean. There is also the Garona river basin, which flows to the Atlantic Ocean but covers only 1.73% of the Catalan territory.
The hydrographic network can be divided into two sectors: a western slope, that of the Ebro river, and an eastern slope made up of smaller rivers that flow to the Mediterranean along the Catalan coast. The first slope provides an average of 18,700 cubic hectometres (4.5 cubic miles) per year, while the second provides an average of only 2,020 hm³ (0.48 cu mi) per year. The difference is due to the large contribution of the Ebro river, of which the Segre is an important tributary. Moreover, Catalonia has a relative wealth of groundwater, although it is unevenly distributed among the comarques, given the complex geological structure of the territory. In the Pyrenees there are many small lakes, remnants of the ice age. The biggest are the lake of Banyoles and the recently recovered lake of Ivars.
The Catalan coast is almost rectilinear, with a length of 580 kilometres (360 mi) and few landforms—the most relevant are the Cap de Creus and the Gulf of Roses to the north and the Ebro Delta to the south. The Catalan Coastal Range hugs the coastline, which is split into two segments, one between L'Estartit and the town of Blanes (the Costa Brava), and the other to the south, at the Costes del Garraf.
The principal rivers in Catalonia are the Ter, Llobregat, and the Ebro (Catalan: Ebre), all of which run into the Mediterranean.
The majority of the Catalan population is concentrated in 30% of the territory, mainly in the coastal plains. Intensive agriculture, livestock farming and industrial activities have been accompanied by a massive tourist influx (more than 20 million annual visitors) and by rapid urbanization, even metropolisation, which has led to strong urban sprawl: two-thirds of Catalans live in the urban area of Barcelona, while the proportion of urban land increased from 4.2% in 1993 to 6.2% in 2009, a growth of 48.6% in sixteen years, complemented by a dense network of transport infrastructure. This has been accompanied by a degree of agricultural abandonment (a 15% decrease in the cultivated area of Catalonia between 1993 and 2009) and a general threat to the natural environment. Human activities have also put some animal species at risk, or even caused their disappearance from the territory, such as the gray wolf and probably the brown bear of the Pyrenees. The pressure created by this model of life means that the country's ecological footprint exceeds its administrative area.
Faced with these problems, the Catalan authorities have taken several measures to protect natural ecosystems. In 1990, the Catalan government created the Nature Conservation Council (Catalan: Consell de Protecció de la Natura), an advisory body with the aim of studying, protecting and managing the natural environments and landscapes of Catalonia. In addition, the Generalitat adopted the Plan of Spaces of Natural Interest (Pla d'Espais d'Interès Natural or PEIN) in 1992, and eighteen Natural Spaces of Special Protection (Espais Naturals de Protecció Especial or ENPE) have been instituted.
There is one National Park, Aigüestortes i Estany de Sant Maurici; fourteen Natural Parks: Alt Pirineu, Aiguamolls de l'Empordà, Cadí-Moixeró, Cap de Creus, Sources of the Ter and Freser, Collserola, Ebro Delta, Ports, Montgrí, Medes Islands and Baix Ter, Montseny, Montserrat, Sant Llorenç del Munt and l'Obac, Serra de Montsant, and the Garrotxa Volcanic Zone; as well as three Natural Places of National Interest (Paratge Natural d'Interès Nacional or PNIN): the Pedraforca, the Poblet Forest and the Albères.
After Franco's death in 1975 and the adoption of a democratic constitution in Spain in 1978, Catalonia recovered and extended the powers that it had gained in the Statute of Autonomy of 1932 but lost with the fall of the Second Spanish Republic at the end of the Spanish Civil War in 1939.
This autonomous community has gradually achieved more autonomy since the approval of the Spanish Constitution of 1978. The Generalitat holds exclusive jurisdiction in education, health, culture, environment, communications, transportation, commerce, public safety and local government, and shares jurisdiction with the Spanish government only in justice. In all, some analysts argue that formally the current system grants Catalonia "more self-government than almost any other corner in Europe".
Support for Catalan nationalism ranges from a demand for further autonomy and the federalisation of Spain to the desire for independence from the rest of Spain, expressed by Catalan independentists. The first survey following the Constitutional Court ruling that cut back elements of the 2006 Statute of Autonomy, published by La Vanguardia on 18 July 2010, found that 46% of voters would support independence in a referendum. In February of the same year, a poll by the Open University of Catalonia gave more or less the same results. Other polls have shown lower support for independence, ranging from 40 to 49%. Although support for independence exists throughout the territory, it is significantly higher in the hinterland and the northeast, away from the more populated coastal areas such as Barcelona.
Since 2011, when the question began to be regularly surveyed by the governmental Center for Public Opinion Studies (CEO), support for Catalan independence has been on the rise. According to the CEO opinion poll of July 2016, 47.7% of Catalans would vote for independence and 42.4% against it, while, on the question of preferences, according to the CEO opinion poll of March 2016, 57.2% claimed to be "absolutely" or "fairly" in favour of independence. Other polls show more variable results: according to the Spanish CIS, as of December 2016, 47% of Catalans rejected independence and 45% supported it.
In hundreds of non-binding local referendums on independence, organised across Catalonia from 13 September 2009, a large majority voted for independence, although critics argued that the polls were mostly held in pro-independence areas. In December 2009, 94% of those voting backed independence from Spain, on a turnout of 25%. The final local referendum was held in Barcelona in April 2011. On 11 September 2012, a pro-independence march drew a crowd estimated at 600,000 by the Spanish Government, 1.5 million by the Guàrdia Urbana de Barcelona, and 2 million by its promoters; poll results at the time indicated that half the population of Catalonia supported secession from Spain.
Two major factors behind the rise in support for independence were Spain's Constitutional Court's 2010 decision declaring part of the 2006 Statute of Autonomy of Catalonia unconstitutional, and the fact that Catalonia contributes 19.49% of the central government's tax revenue but receives only 14.03% of central government spending.
Parties that consider themselves either Catalan nationalist or independentist have been present in all Catalan governments since 1980. The largest Catalan nationalist party, Convergence and Union, ruled Catalonia from 1980 to 2003, and returned to power in the 2010 election. Between 2003 and 2010, a leftist coalition, composed of the Catalan Socialists' Party, the pro-independence Republican Left of Catalonia and the leftist-environmentalist Initiative for Catalonia-Greens, implemented policies that widened Catalan autonomy.
In the 25 November 2012 Catalan parliamentary election, sovereigntist parties supporting a secession referendum gathered 59.01% of the votes and held 87 of the 135 seats in the Catalan Parliament. Parties supporting independence from the rest of Spain obtained 49.12% of the votes and a majority of 74 seats.
Artur Mas, then the president of Catalonia, called early elections, which took place on 27 September 2015. In these elections, Convergència and Esquerra Republicana decided to run together under a coalition named Junts pel Sí (Catalan for "Together for Yes"). Junts pel Sí won 62 seats and was the most-voted candidacy, and the CUP (Candidatura d'Unitat Popular, a far-left independentist party) won another 10, so the pro-independence forces together held 72 seats, an absolute majority of seats but not of individual votes, as they comprised 47.74% of the total.
The Statute of Autonomy of Catalonia is the fundamental organic law, second only to the Spanish Constitution from which the Statute originates.
In the Spanish Constitution of 1978, Catalonia, along with the Basque Country and Galicia, was defined as a "nationality". The same constitution gave Catalonia the automatic right to autonomy, which resulted in the Statute of Autonomy of Catalonia of 1979.
Both the 1979 Statute of Autonomy and the current one, approved in 2006, state that "Catalonia, as a nationality, exercises its self-government constituted as an Autonomous Community in accordance with the Constitution and with the Statute of Autonomy of Catalonia, which is its basic institutional law, always under the law in Spain".
The Preamble of the 2006 Statute of Autonomy of Catalonia states that the Parliament of Catalonia has defined Catalonia as a nation, but that "the Spanish Constitution recognizes Catalonia's national reality as a nationality". While the Statute was approved and sanctioned by both the Catalan and Spanish parliaments, and later by referendum in Catalonia, it was subject to a legal challenge by the surrounding autonomous communities of Aragon, the Balearic Islands and Valencia, as well as by the conservative People's Party. The objections were based on various issues, such as disputed cultural heritage but, especially, on the Statute's alleged breaches of the principle of "solidarity between regions" in fiscal and educational matters enshrined in the Constitution.
Spain's Constitutional Court assessed the disputed articles and, on 28 June 2010, issued its judgment on the principal allegation of unconstitutionality presented by the People's Party in 2006. The judgment granted clear passage to 182 of the 223 articles that make up the fundamental text. The court approved 73 of the 114 articles that the People's Party had contested, while declaring 14 articles unconstitutional in whole or in part and imposing a restrictive interpretation on 27 others. The court accepted the specific provision that described Catalonia as a "nation", but ruled that it was a historical and cultural term with no legal weight and that Spain remained the only nation recognised by the constitution.
The Catalan Statute of Autonomy establishes that Catalonia, as an autonomous community, is organised politically through the Generalitat of Catalonia (Catalan: Generalitat de Catalunya), comprising the Parliament, the Presidency of the Generalitat, the Government or Executive Council, and the other institutions established by the Parliament, among them the Ombudsman (Síndic de Greuges), the Office of Auditors (Sindicatura de Comptes), the Council for Statutory Guarantees (Consell de Garanties Estatutàries) and the Audiovisual Council of Catalonia (Consell de l'Audiovisual de Catalunya).
The Parliament of Catalonia (Catalan: Parlament de Catalunya) is the unicameral legislative body of the Generalitat and represents the people of Catalonia. Its 135 members (diputats) are elected by universal suffrage to serve for a four-year period. According to the Statute of Autonomy, it has powers to legislate over devolved matters such as education, health, culture, and internal institutional and territorial organization, as well as to nominate the President of the Generalitat and to control the Government, the budget and other affairs. The last Catalan election was held on 14 February 2021, and its current speaker (president) is Laura Borràs, incumbent since 12 March 2018.
The President of the Generalitat of Catalonia (Catalan: president de la Generalitat de Catalunya) is the highest representative of Catalonia and is also responsible for leading the government's action, presiding over the Executive Council. Since the restoration of the Generalitat on the return of democracy in Spain, the Presidents of Catalonia have been Josep Tarradellas (1977–1980, president in exile since 1954), Jordi Pujol (1980–2003), Pasqual Maragall (2003–2006), José Montilla (2006–2010), Artur Mas (2010–2016), Carles Puigdemont (2016–2017) and, after the imposition of direct rule from Madrid, Quim Torra (2018–2020) and Pere Aragonès (2021–).
The Executive Council (Catalan: Consell Executiu) or Government (Govern) is the body responsible for the government of the Generalitat. It holds executive and regulatory power and is accountable to the Catalan Parliament. It comprises the President of the Generalitat, the First Minister (conseller primer) or the Vice President, and the ministers (consellers) appointed by the president. Its seat is the Palau de la Generalitat in Barcelona. In 2021 the government was a coalition of two parties, the Republican Left of Catalonia (ERC) and Together for Catalonia (Junts), and was made up of 14 ministers, including the Vice President, alongside the president and a secretary of government; in October 2022, however, Together for Catalonia (Junts) left the coalition and the government.
Catalonia has its own police force, the Mossos d'Esquadra (officially called Mossos d'Esquadra-Policia de la Generalitat de Catalunya), whose origins date back to the 18th century. Since 1980 they have been under the command of the Generalitat, and since 1994 they have expanded in number in order to replace the national Civil Guard and National Police Corps, which report directly to the Spanish Ministry of the Interior. The national bodies retain personnel within Catalonia to exercise functions of national scope, such as overseeing ports, airports, coasts, international borders and customs offices, checking identity documents, arms control, immigration control, terrorism prevention and the prevention of arms trafficking, amongst others.
Most of the justice system is administered by national judicial institutions; the highest body and court of last instance in the Catalan jurisdiction, forming part of the Spanish judiciary, is the High Court of Justice of Catalonia. The criminal justice system is uniform throughout Spain, while civil law is administered separately within Catalonia. The civil laws subject to autonomous legislation have been codified in the Civil Code of Catalonia (Codi civil de Catalunya) since 2002.
Catalonia, together with Navarre and the Basque Country, is among the Spanish communities with the highest degree of autonomy in terms of law enforcement.
Catalonia is organised territorially into provinces, further subdivided into comarques and municipalities. The 2006 Statute of Autonomy of Catalonia establishes the administrative organisation of three local authorities: vegueries, comarques, and municipalities.
Catalonia is divided administratively into four provinces (Barcelona, Girona, Lleida and Tarragona), the governing body of each being the Provincial Deputation (Catalan: Diputació Provincial, Spanish: Diputación Provincial).
Comarques (singular: comarca) are entities composed of municipalities to manage their responsibilities and services. The current comarcal division has its roots in a decree of the Generalitat de Catalunya of 1936, in effect until 1939, when it was suppressed by Franco. In 1987 the Catalan Government re-established the comarcal division, and in 1988 three new comarques were added (Alta Ribagorça, Pla d'Urgell and Pla de l'Estany). In 2015 an additional comarca, the Moianès, was created. At present there are 41, excluding Aran. Every comarca is administered by a comarcal council (consell comarcal).
The Aran Valley (Val d'Aran), previously considered a comarca, obtained a particular status within Catalonia in 1990 due to its differences in culture and language, as Occitan is the native language of the valley; it is administered by a body known as the Conselh Generau d'Aran (General Council of Aran). Since 2015 it has been defined as a "unique territorial entity", and the powers of the Conselh Generau have been expanded.
There are at present 947 municipalities (municipis) in Catalonia. Each municipality is run by a council (ajuntament) elected every four years by the residents in local elections. The council consists of a number of members (regidors) depending on population, who elect the mayor (alcalde or batlle). Its seat is the town hall (ajuntament, casa de la ciutat or casa de la vila).
The vegueria is a proposed type of division, defined as a specific territorial area for the exercise of government and inter-local cooperation with legal personality. The current Statute of Autonomy states that vegueries are intended to supersede the provinces in Catalonia and to take over many of the functions of the comarques.
The territorial plan of Catalonia (Pla territorial general de Catalunya) provided for six general functional areas, but it was amended by Law 24/2001, of 31 December, which recognized the Alt Pirineu i Aran as a new functional area differentiated from Ponent. On 14 July 2010 the Catalan Parliament approved the creation of the functional area of the Penedès.
Catalonia is a highly industrialized region; its nominal GDP in 2018 was €228 billion (second only to the Community of Madrid, €230 billion) and its per capita GDP was €30,426 ($32,888), behind Madrid (€35,041), the Basque Country (€33,223) and Navarre (€31,389). That year, GDP growth was 2.3%.
Catalonia's long-term credit rating is BB (Non-Investment Grade) according to Standard & Poor's, Ba2 (Non-Investment Grade) according to Moody's, and BBB- (Low Investment Grade) according to Fitch Ratings. Catalonia's rating is tied for worst with between 1 and 5 other autonomous communities of Spain, depending on the rating agency.
The city of Barcelona ranked eighth among the world's best cities in which to live, work, research and visit in 2021, according to the report "The World's Best Cities 2021", prepared by Resonance Consultancy.
Despite the current moment of crisis, the Catalan capital is also one of the European reference bases for start-ups and the fifth-best city in the world in which to establish one of these companies, behind London, Berlin, Paris and Amsterdam, according to the Eu-Starts-Up 2020 study. In the Resonance ranking, Barcelona sits behind London, New York, Paris, Moscow, Tokyo, Dubai and Singapore, and ahead of Los Angeles and Madrid.
In the context of the financial crisis of 2007–2008, Catalonia was expected to suffer a recession amounting to almost a 2% contraction of its regional GDP in 2009. Catalonia's debt in 2012 was the highest of all Spain's autonomous communities, reaching €13,476 million, i.e. 38% of the total debt of the 17 autonomous communities, but in recent years its economy has recovered, with GDP growing 3.3% in 2015.
Catalonia is amongst the country subdivisions with a GDP of over 100 billion US dollars, and it is a member of the Four Motors for Europe organisation.
The distribution of sectors is as follows:
The main tourist destinations in Catalonia are the city of Barcelona, the beaches of the Costa Brava in Girona, the beaches of the Costa del Maresme and Costa del Garraf from Malgrat de Mar to Vilanova i la Geltrú and the Costa Daurada in Tarragona. In the High Pyrenees there are several ski resorts, near Lleida. On 1 November 2012, Catalonia started charging a tourist tax. The revenue is used to promote tourism, and to maintain and upgrade tourism-related infrastructure.
Many of Spain's leading savings banks were based in Catalonia before the independence referendum of 2017. However, in the aftermath of the referendum, many of them moved their registered offices to other parts of Spain, including the two biggest Catalan banks at that moment: La Caixa, which moved its office to Palma de Mallorca, and Banc Sabadell, ranked fourth among all Spanish private banks, which moved its office to Alicante. This happened after the Spanish government passed a law allowing companies to move their registered office without requiring the approval of the company's general meeting of shareholders. Overall, there was a negative net relocation rate of companies based in Catalonia moving to other autonomous communities of Spain. From the 2017 independence referendum until the end of 2018, for example, Catalonia lost 5,454 companies to other parts of Spain (mainly Madrid), 2,359 in 2018 alone, while gaining 467 new ones from the rest of the country during 2018. It has been reported that the Spanish government and the Spanish King Felipe VI pressured some of the big Catalan companies to move their headquarters outside of the region.
The stock market of Barcelona, which in 2016 traded a volume of around €152 billion, is the second largest in Spain after Madrid's, and Fira de Barcelona organizes international exhibitions and congresses covering different sectors of the economy.
The main economic cost for Catalan families is the purchase of a home. According to data from the Society of Appraisal of 31 December 2005, Catalonia is, after Madrid, the second most expensive region in Spain for housing, at an average of €3,397/m² (see Spanish property bubble).
The unemployment rate stood at 10.5% in 2019 and was lower than the national average.
Airports in Catalonia are owned and operated by Aena (a Spanish Government entity) except two airports in Lleida which are operated by Aeroports de Catalunya (an entity belonging to the Government of Catalonia).
Since the Middle Ages, Catalonia has been well integrated into international maritime networks. The port of Barcelona (owned and operated by Puertos del Estado, a Spanish Government entity) is an industrial, commercial and tourist port of worldwide importance. With 1,950,000 TEUs in 2015, it is the largest container port in Catalonia, the third in Spain after Valencia and Algeciras in Andalusia, the 9th in the Mediterranean Sea, the 14th in Europe and the 68th in the world. It is the sixth-largest cruise port in the world and the first in Europe and the Mediterranean, with 2,364,292 passengers in 2014. The ports of Tarragona (owned and operated by Puertos del Estado) in the southwest and Palamós near Girona in the northeast are much more modest. The port of Palamós and the 26 other ports in Catalonia are operated and administered by Ports de la Generalitat, a Catalan Government entity.
The development of these infrastructures, shaped by the topography and history of the Catalan territory, responds strongly to the administrative and political organization of this autonomous community.
There are 12,000 kilometres (7,500 mi) of roads throughout Catalonia.
The principal highways are AP-7 (Autopista de la Mediterrània) and A-7 (Autovia de la Mediterrània). They follow the coast from the French border to Valencia, Murcia and Andalusia. The main roads generally radiate from Barcelona. The AP-2 (Autopista del Nord-est) and A-2 (Autovia del Nord-est) connect inland and onward to Madrid.
Other major roads are:
Publicly owned roads in Catalonia are managed either by the autonomous government of Catalonia (e.g., C- roads) or by the Spanish government (e.g., AP-, A- and N- roads).
Catalonia saw the first railway construction in the Iberian Peninsula in 1848, linking Barcelona with Mataró. Given the topography, most lines radiate from Barcelona. The city has both suburban and inter-city services. The main east coast line runs through the province connecting with the SNCF (French Railways) at Portbou on the coast.
There are two publicly owned railway companies operating in Catalonia: the Catalan FGC that operates commuter and regional services, and the Spanish national Renfe that operates long-distance and high-speed rail services (AVE and Avant) and the main commuter and regional service Rodalies de Catalunya, administered by the Catalan government since 2010.
High-speed rail (AVE) services from Madrid currently reach Barcelona via Lleida and Tarragona. The line between Barcelona and Madrid officially opened on 20 February 2008, and the journey between the two cities now takes about two and a half hours. A connection to the French high-speed TGV network has been completed (the Perpignan–Barcelona high-speed rail line), with the Spanish AVE service beginning commercial operation on the line on 9 January 2013, later offering services to Marseille on the French high-speed network. This was shortly followed by the start of commercial service by the French TGV on 17 January 2013, with an average travel time on the Paris–Barcelona TGV route of 7 h 42 min. The new line passes through Girona and Figueres with a tunnel through the Pyrenees.
As of 2017, the official population of Catalonia was 7,522,596. 1,194,947 residents did not have Spanish citizenship, accounting for about 16% of the population.
The Urban Region of Barcelona includes 5,217,864 people and covers an area of 2,268 km² (876 sq mi). The metropolitan area of the Urban Region includes cities such as L'Hospitalet de Llobregat, Sabadell, Terrassa, Badalona, Santa Coloma de Gramenet and Cornellà de Llobregat.
In 1900, the population of Catalonia was 1,966,382, and by 1970 it had reached 5,122,567. This sizeable increase was due to the demographic boom in Spain during the 1960s and early 1970s, as well as to large-scale internal migration from economically weak rural regions to its more prosperous industrial cities. In Catalonia, that wave of internal migration arrived from several regions of Spain, especially Andalusia, Murcia and Extremadura. As of 1999, it was estimated that over 60% of Catalans descended from 20th-century migrations from other parts of Spain.
Immigrants from other countries have settled in Catalonia since the 1990s; a large percentage comes from Africa, Latin America and Eastern Europe, with smaller numbers from Asia and Southern Europe, often settling in urban centers such as Barcelona and in industrial areas. In 2017, Catalonia had 940,497 foreign residents (11.9% of the total population) with non-Spanish ID cards, not including those who had acquired Spanish citizenship.
Religion in Catalonia (2020)
Historically, all of the Catalan population was Christian, specifically Catholic, but since the 1980s there has been a trend of decline of Christianity. Nevertheless, according to the most recent study sponsored by the Government of Catalonia, as of 2020, 62.3% of Catalans identify as Christians (up from 61.9% in 2016 and 56.5% in 2014), of whom 53.0% are Catholics, 7.0% Protestants and Evangelicals, 1.3% Orthodox Christians and 1.0% Jehovah's Witnesses. At the same time, 18.6% of the population identify as atheists, 8.8% as agnostics, 4.3% as Muslims, and a further 3.4% as being of other religions.
According to the linguistic census held by the Government of Catalonia in 2013, Spanish is the most spoken language in Catalonia (46.53% claim Spanish as "their own language"), followed by Catalan (37.26% claim Catalan as "their own language"). In everyday use, 11.95% of the population claim to use both languages equally, whereas 45.92% mainly use Spanish and 35.54% mainly use Catalan. There is a significant difference between the Barcelona metropolitan area (and, to a lesser extent, the Tarragona area), where Spanish is more spoken than Catalan, and the more rural and small town areas, where Catalan clearly prevails over Spanish.
Originating in the historic territory of Catalonia, Catalan has enjoyed special status since the approval of the Statute of Autonomy of 1979 which declares it to be "Catalonia's own language", a term which signifies a language given special legal status within a Spanish territory, or which is historically spoken within a given region. The other languages with official status in Catalonia are Spanish, which has official status throughout Spain, and Aranese Occitan, which is spoken in Val d'Aran.
Since the Statute of Autonomy of 1979, Aranese (a Gascon dialect of Occitan) has also been official and subject to special protection in Val d'Aran. This small area of 7,000 inhabitants was the only place where a dialect of Occitan had received full official status. Then, on 9 August 2006, when the new Statute came into force, Occitan became official throughout Catalonia. Occitan is the mother tongue of 22.4% of the population of Val d'Aran, which has attracted heavy immigration from other Spanish regions to work in the service industry. Catalan Sign Language is also officially recognised.
Although not considered an "official language" in the same way as Catalan, Spanish, and Occitan, the Catalan Sign Language, with about 18,000 users in Catalonia, is granted official recognition and support: "The public authorities shall guarantee the use of Catalan sign language and conditions of equality for deaf people who choose to use this language, which shall be the subject of education, protection and respect."
As had been the case since the ascent of the Bourbon dynasty to the throne of Spain after the War of the Spanish Succession, and with the exception of the short period of the Second Spanish Republic, under Francoist Spain Catalan was banned from schools and from all other official use, so that, for example, families were not allowed to officially register children with Catalan names. Although never completely banned, Catalan-language publishing was severely restricted during the early 1940s, with only religious texts and small-run self-published texts being released. Some books were published clandestinely or circumvented the restrictions by showing publishing dates prior to 1936. This policy changed in 1946, when restricted publishing in Catalan resumed.
Rural–urban migration originating in other parts of Spain also reduced the social use of Catalan in urban areas and increased the use of Spanish. Lately, a similar sociolinguistic phenomenon has occurred with foreign immigration. Catalan cultural activity increased in the 1960s and the teaching of Catalan began thanks to the initiative of associations such as Òmnium Cultural.
After the end of Francoist Spain, the newly established self-governing democratic institutions in Catalonia embarked on a long-term language policy to recover the use of Catalan and, since 1983, have enforced laws that attempt to protect and extend its use. This policy, known as "linguistic normalisation" (normalització lingüística in Catalan, normalización lingüística in Spanish), has been supported by the vast majority of Catalan political parties over the last thirty years. Some groups consider these efforts a way to discourage the use of Spanish, whereas others, including the Catalan government and the European Union, consider the policies respectful, or even an example that "should be disseminated throughout the Union".
Today, Catalan is the main language of the Catalan autonomous government and of the other public institutions that fall under its jurisdiction. Basic public education is given mainly in Catalan, but there are also some hours per week of Spanish-medium instruction. Although businesses are required by law to display all information (e.g. menus, posters) at least in Catalan, this is not systematically enforced. There is no obligation to display this information in either Occitan or Spanish, although there is no restriction on doing so in these or other languages. The use of fines was introduced in a 1997 linguistic law that aims to increase the public use of Catalan and defend the rights of Catalan speakers. On the other hand, the Spanish Constitution does not recognize equal language rights for national minorities, since it enshrines Spanish as the only official language of the state, knowledge of which is compulsory. Numerous laws, regarding for instance the labelling of pharmaceutical products, in effect make Spanish the only language of compulsory use.
The law ensures that both Catalan and Spanish – being official languages – can be used by the citizens without prejudice in all public and private activities. The Generalitat uses Catalan in its communications and notifications addressed to the general population, but citizens can also receive information from the Generalitat in Spanish if they so wish. Debates in the Catalan Parliament take place almost exclusively in Catalan and the Catalan public television broadcasts programs mainly in Catalan.
Due to the intense immigration which Spain in general and Catalonia in particular experienced in the first decade of the 21st century, many foreign languages are spoken in various cultural communities in Catalonia, of which Rif-Berber, Moroccan Arabic, Romanian and Urdu are the most common ones.
In Catalonia, there is a high social and political consensus on the language policies favoring Catalan, including among Spanish speakers and speakers of other languages. However, some of these policies have been criticised for trying to promote Catalan by imposing fines on businesses. For example, following the passage of a law on Catalan cinema in March 2010, which established that half of the movies shown in Catalan cinemas had to be in Catalan, a general strike of 75% of the cinemas took place. The Catalan government gave in and dropped the clause requiring 50% of the movies to be dubbed or subtitled in Catalan before the law came into effect. On the other hand, organisations such as Plataforma per la Llengua have reported various violations of the linguistic rights of Catalan speakers in Catalonia and the other Catalan-speaking territories in Spain, most of them caused by the institutions of the Spanish government in these territories.
The Catalan language policy has been challenged by some political parties in the Catalan Parliament. Citizens, currently the main opposition party, has been one of the most consistent critics of the Catalan language policy within Catalonia. The Catalan branch of the People's Party has a more ambiguous position on the issue: on one hand, it demands a bilingual Catalan–Spanish education and a more balanced language policy that would defend Catalan without favoring it over Spanish, whereas on the other hand, a few local PP politicians have supported in their municipalities measures privileging Catalan over Spanish and it has defended some aspects of the official language policies, sometimes against the positions of its colleagues from other parts of Spain.
Catalonia has given the world many important figures in the arts. Internationally known Catalan painters include Salvador Dalí, Joan Miró and Antoni Tàpies, among others. Closely linked with the Catalan pictorial scene, Pablo Picasso lived in Barcelona during his youth, training there as an artist, and went on to create the movement of Cubism. Other important artists are Claudi Lorenzale, for the medieval Romanticism that marked the artistic Renaixença; Marià Fortuny, for the Romanticism and Catalan Orientalism of the nineteenth century; Ramon Casas and Santiago Rusiñol, the main representatives of the pictorial current of Catalan modernism from the end of the nineteenth century to the beginning of the twentieth; Josep Maria Sert, for early 20th-century Noucentisme; and Josep Maria Subirachs, for the expressionist and abstract sculpture and painting of the late twentieth century.
The most important painting museums of Catalonia are the Teatre-Museu Dalí in Figueres, the National Art Museum of Catalonia (MNAC), Picasso Museum, Fundació Antoni Tàpies, Joan Miró Foundation, the Barcelona Museum of Contemporary Art (MACBA), the Centre of Contemporary Culture of Barcelona (CCCB) and the CaixaForum.
In the field of architecture, various artistic styles prevalent in Europe were developed and adapted in Catalonia, leaving footprints in many churches, monasteries and cathedrals of the Romanesque (the best examples of which are located in the northern half of the territory) and Gothic styles. The Gothic that developed in Barcelona and its area of influence is known as Catalan Gothic, with some particular characteristics; the church of Santa Maria del Mar is an example of this style. During the Middle Ages, many fortified castles were built by feudal nobles to mark their power.
There are some examples of Renaissance (such as the Palau de la Generalitat), Baroque and Neoclassical architecture. In the late nineteenth century, Modernism (Art Nouveau) appeared as the national art. The world-renowned Catalan architects of this style are Antoni Gaudí, Lluís Domènech i Montaner and Josep Puig i Cadafalch. Thanks to the urban expansion of Barcelona during the last decades of the nineteenth century and the first of the twentieth, many buildings of the Eixample are modernist. In the field of architectural rationalism, which became especially relevant in Catalonia during the Republican era (1931–1939), the leading figures were Josep Lluís Sert and Josep Torres i Clavé, members of the GATCPAC; in contemporary architecture, Ricardo Bofill and Enric Miralles stand out.
There are several UNESCO World Heritage Sites in Catalonia:
The oldest surviving literary use of the Catalan language is considered to be the religious text known as the Homilies d'Organyà, written in either the late 11th or the early 12th century.
There are two historical moments of splendor in Catalan literature. The first begins with the historiographic chronicles of the 13th century (chronicles written between the thirteenth and fourteenth centuries narrating the deeds of the monarchs and leading figures of the Crown of Aragon) and extends through the subsequent Golden Age of the 14th and 15th centuries. After that period, the era between the 16th and 19th centuries was defined by Romantic historiography as the Decadència, considered a "decadent" period in Catalan literature because of a general falling into disuse of the vernacular language in cultural contexts and a lack of patronage among the nobility.
The second moment of splendor began in the 19th century with the cultural and political Renaixença (Renaissance) represented by writers and poets such as Jacint Verdaguer, Víctor Català (pseudonym of Caterina Albert i Paradís), Narcís Oller, Joan Maragall and Àngel Guimerà. During the 20th century, avant-garde movements developed, initiated by the Generation of '14 (called Noucentisme in Catalonia), represented by Eugenio d'Ors, Joan Salvat-Papasseit, Josep Carner, Carles Riba, J.V. Foix and others. During the dictatorship of Primo de Rivera, the Civil War (Generation of '36) and the Francoist period, Catalan literature was maintained despite the repression against the Catalan language, being often produced in exile.
The most outstanding authors of this period are Salvador Espriu, Josep Pla and Josep Maria de Sagarra (considered mainly responsible for the renewal of Catalan prose), Mercè Rodoreda, Joan Oliver Sallarès or "Pere Quart", Pere Calders, Gabriel Ferrater, Manuel de Pedrolo, Agustí Bartra and Miquel Martí i Pol. In addition, several foreign writers who fought in the International Brigades or other military units have since recounted their experiences of the fighting in their works, historical or fictional, for example George Orwell in Homage to Catalonia (1938), or Claude Simon in Le Palace (1962) and Les Géorgiques (1981).
After the transition to democracy (1975–1978) and the restoration of the Generalitat (1977), literary life and the publishing market returned to normality, and literary production in Catalan has been bolstered by a number of language policies intended to protect Catalan culture. Besides the aforementioned authors, other relevant 20th-century writers of the Francoist and democratic periods include Joan Brossa, Agustí Bartra, Manuel de Pedrolo, Pere Calders and Quim Monzó.
Ana María Matute, Jaime Gil de Biedma, Manuel Vázquez Montalbán and Juan Goytisolo are among the most prominent Catalan writers in the Spanish language since the democratic restoration in Spain.
Castells are one of the main manifestations of Catalan popular culture. The activity consists in building human towers by competing colles castelleres (teams). This practice originated in Valls, in the region of the Camp de Tarragona, during the 18th century, and later spread to the rest of the territory, especially in the late 20th century. The tradition of els Castells i els Castellers was declared a Masterpiece of the Oral and Intangible Heritage of Humanity by UNESCO in 2010.
In main celebrations, other elements of the Catalan popular culture are also usually present: parades with gegants (giants), bigheads, stick-dancers and musicians, and the correfoc, where devils and monsters dance and spray showers of sparks using firecrackers. Another traditional celebration in Catalonia is La Patum de Berga, declared a Masterpiece of the Oral and Intangible Heritage of Humanity by the UNESCO on 25 November 2005.
Christmas in Catalonia lasts two days, plus Christmas Eve. Christmas is celebrated on the 25th, followed by a similar feast on the 26th, called Sant Esteve (Saint Stephen's Day). This allows families to visit and dine with different parts of the extended family, or to get together with friends, on the second day.
One of the most deeply rooted and curious Christmas traditions is the popular figure of the Tió de Nadal, consisting of an (often hollow) log with a face painted on it and often two little front legs appended, usually wearing a Catalan hat and scarf. The word has nothing to do with the Spanish word tío, meaning uncle. Tió means log in Catalan. The log is sometimes "found in the woods" (in an event staged for children) and then adopted and taken home, where it is fed and cared for during a month or so. On Christmas Day or on Christmas Eve, a game is played where children march around the house singing a song requesting the log to poop, then they hit the log with a stick, to make it poop, and lo and behold, as if through magic, it poops candy, and sometimes other small gifts. Usually, the larger or main gifts are brought by the Three Kings on 6 January, and the tió only brings small things.
Another custom is to make a pessebre (nativity scene) in the home or in shop windows, the latter sometimes competing in originality or sheer size and detail. Churches often host exhibits of numerous dioramas by nativity scene makers, or a single nativity scene they put out, and town halls generally put out a nativity scene in the central square. In Barcelona, every year, the main nativity scene is designed by different artists, and often ends up being an interesting, post-modern or conceptual and strange creation. In the home, the nativity scene often consists of strips of cork bark to represent cliffs or mountains in the background, moss as grass in the foreground, some wood chips or other as dirt, and aluminum foil for rivers and lakes. The traditional figurines often included are the three wise men on camels or horses, which are moved every day or so to go closer to the manger, a star with a long tail in the background to lead people to the spot, the annunciation with shepherds having a meal and an angel appearing (hanging from something), a washer lady washing clothes in the pond, sheep, ducks, people carrying packages on their backs, a donkey driver with a load of twigs, and atrezzo such as a starry sky, miniature towns placed in the distance, either Oriental-styled or local-looking, a bridge over the river, trees, etc.
One of the most astonishing and sui generis figurines traditionally placed in the nativity scene, to the great glee of children, is the caganer, a person depicted in the act of defecating. This figurine is hidden in some corner of the nativity scene and the game is to detect it. Churches, of course, forgo this figurine, and the main nativity scene of Barcelona, for instance, likewise does not feature it. The caganer is so popular that it has, together with the tió, long been a major part of the Christmas markets, where caganers are sold in the guise of well-known politicians and other famous people, as well as the traditional figure of a Catalan farmer. Contrary to what one might imagine, people often buy a caganer in the guise of a famous person they are actually fond of, though sometimes they buy one in the guise of someone they dislike, even if this means having to look at that person at home.
Another Christmas tradition, in the extended sense, is the celebration of the Epiphany on 6 January, called Reis (Three Kings Day). This is very important in Catalonia and the Catalan-speaking areas: families go to watch major parades on the eve of the Epiphany, where they can greet the kings and watch them pass by in pomp and circumstance, on floats, preceded and followed by pages, musicians and dancers. Children often give the kings letters with their gift requests, which are collected by the pages. On the next day, the children find the gifts the three kings brought for them.
In addition to traditional local Catalan culture, traditions from other parts of Spain can be found as a result of migration from other regions, for instance the celebration of the Andalusian Feria de Abril in Catalonia.
On 28 July 2010, Catalonia became the second Spanish territory, after the Canary Islands, to forbid bullfighting. The ban, which went into effect on 1 January 2012, originated in a popular petition supported by over 180,000 signatures.
The sardana is considered to be the most characteristic Catalan folk dance. It is danced in a circle to the music of a cobla, an ensemble comprising the tamborí, the tible and the tenora (from the oboe family), the trumpet, the trombó (trombone), the fiscorn (from the bugle family) and a three-stringed contrabaix. Other tunes and dances of the traditional music are the contrapàs (obsolete today), the ball de bastons (the "dance of sticks"), the moixiganga, the goigs (popular songs), the galops and, in the southern part, the jota. The havaneres are characteristic of some coastal localities of the Costa Brava, especially during the summer months, when these songs are sung outdoors accompanied by a cremat of burned rum.
Art music was cultivated, until the nineteenth century and as in much of Europe, in a liturgical setting, particularly marked by the Escolania de Montserrat. The main Western musical trends marked these productions, from medieval monodies and polyphonies, with the work of Abbot Oliba in the eleventh century and the fourteenth-century compilation Llibre Vermell de Montserrat ("Red Book of Montserrat"). Through the Renaissance there were authors such as Pere Albert Vila, Joan Brudieu and the two Mateu Fletxa ("the Elder" and "the Younger"). The Baroque had composers like Joan Cererols. Romantic music was represented by composers such as Fernando Sor, Josep Anselm Clavé (father of the choral movement in Catalonia and responsible for the revival of folk music) and Felip Pedrell.
Modernisme was also expressed in musical terms from the end of the 19th century onwards, mixing folkloric and post-Romantic influences, through the works of Isaac Albéniz and Enric Granados. The avant-garde spirit initiated by the modernists was prolonged throughout the twentieth century thanks to the activities of the Orfeó Català, a choral society founded in 1891, with its monumental concert hall, the Palau de la Música Catalana, built by Lluís Domènech i Montaner from 1905 to 1908; the Barcelona Symphony Orchestra, created in 1944; and composers, conductors and musicians engaged against Francoism, like Robert Gerhard, Eduard Toldrà and Pau Casals.
Performances of opera, mostly imported from Italy, began in the 18th century, but some native operas were written as well, including the ones by Domènec Terradellas, Carles Baguer, Ramon Carles, Isaac Albéniz and Enric Granados. The Barcelona main opera house, Gran Teatre del Liceu (opened in 1847), remains one of the most important in Spain, hosting one of the most prestigious music schools in Barcelona, the Conservatori Superior de Música del Liceu. Several lyrical artists trained by this institution gained international renown during the 20th century, such as Victoria de los Ángeles, Montserrat Caballé, Giacomo Aragall and Josep Carreras.
Cellist Pau Casals is admired as an outstanding player. Other popular musical styles were born in the second half of the 20th century, such as Nova Cançó from the 1960s, with Lluís Llach and the group Els Setze Jutges; the Catalan rumba in the 1960s, with Peret; Catalan rock from the late 1970s, with La Banda Trapera del Río and Decibelios in punk rock, Sau, Els Pets, Sopa de Cabra and Lax'n'Busto in pop rock, and Sangtraït in hard rock; electropop since the 1990s, with OBK; and indie pop from the 1990s.
Catalonia, along with Madrid, is the autonomous community with the most media outlets (TV, magazines, newspapers, etc.). In Catalonia there is a wide variety of local and comarcal media. With the restoration of democracy, many newspapers and magazines, until then in the hands of the Franco government, were recovered in order to convert them into free and democratic media, while local radio and television stations were established.
Televisió de Catalunya, which broadcasts entirely in the Catalan language, is the main Catalan public TV network. It has six channels: TV3, El 33, Super3, 3/24, Esport3 and TV3CAT. In 2018, TV3 became the first television channel to have been the most-viewed one in Catalonia for nine consecutive years. State televisions that broadcast in Catalonia in Spanish include Televisión Española (with a few broadcasts in Catalan), Antena 3, Cuatro, Telecinco and La Sexta. Other smaller Catalan television channels include 8TV (owned by Grup Godó), Barça TV and the local stations, the greatest exponent of which is betevé, the TV channel of Barcelona, which also broadcasts in Catalan.
The two main Catalan newspapers of general information are El Periódico de Catalunya and La Vanguardia, both with editions in Catalan and Spanish. Newspapers published only in Catalan include Ara and El Punt Avui (from the merger of El Punt and Avui in 2011), as well as most of the local press. Spanish newspapers, such as El País, El Mundo and La Razón, can also be acquired.
Catalonia has a long tradition of radio; the first regular radio broadcast in the country was made by Ràdio Barcelona in 1924. Today, the public Catalunya Ràdio (owned by the Catalan Media Corporation) and the private RAC 1 (belonging to Grup Godó) are the two main radio stations of Catalonia, both broadcasting in Catalan.
In cinema, three styles have dominated since the democratic transition. First, auteur cinema, in the continuity of the Barcelona School, emphasizes experimentation and form while focusing on social and political themes. Championed first by Josep Maria Forn and Bigas Luna, then by Marc Recha, Jaime Rosales and Albert Serra, this genre has achieved some international recognition. Second, the documentary became another genre particularly representative of contemporary Catalan cinema, boosted by Joaquim Jordà i Català and José Luis Guerín. Later, horror films and thrillers also emerged as a specialty of the Catalan film industry, thanks in particular to the vitality of the Sitges Film Festival, created in 1968. Several directors have gained worldwide renown through this genre, starting with Jaume Balagueró and his series REC (co-directed with the Valencian Paco Plaza), Juan Antonio Bayona with El Orfanato, and Jaume Collet-Serra with Orphan, Unknown and Non-Stop.
Catalan actors, such as Sergi López, have appeared in both Spanish and international productions.
The Museum of Cinema – Tomàs Mallol Collection (Museu del Cinema – Col·lecció Tomàs Mallol in Catalan) in Girona is home to important permanent exhibitions of cinema and pre-cinema objects. Another important institution for the promotion of cinema is the Gaudí Awards (Premis Gaudí in Catalan), which in 2009 replaced the Barcelona Film Awards, themselves created in 2002, and serve as Catalonia's equivalent of the Spanish Goya Awards or the French César Awards.
Seny is a form of ancestral Catalan wisdom or sensibleness. It involves well-pondered perception of situations, level-headedness, awareness, integrity and right action. Many Catalans consider seny something unique to their culture; it is based on a set of ancestral local customs stemming from the scale of values and social norms of their society.
Sport has had a distinct importance in Catalan life and culture since the beginning of the 20th century; consequently, the region has a well-developed sports infrastructure. The main sports are football, basketball, handball, rink hockey, tennis and motorsport.
Although in the most popular sports Catalonia is represented internationally by the Spanish national teams, in some others, such as korfball, futsal and rugby league, it can officially compete as itself. Most Catalan sports federations have a long tradition, and some of them participated in the foundation of international sports federations, such as the Catalan Federation of Rugby, which was one of the founding members of the Fédération Internationale de Rugby Amateur (FIRA) in 1934. The majority of Catalan sports federations are part of the Sports Federation Union of Catalonia (Catalan: Unió de Federacions Esportives de Catalunya), founded in 1933.
The Catalan Football Federation also periodically fields a national team against international opposition, organizing friendly matches. In recent years it has played against Bulgaria, Argentina, Brazil, the Basque Country, Colombia, Nigeria, Cape Verde and Tunisia. The biggest football clubs are Barcelona (also known as Barça), who have won five European Cups (UEFA Champions League), and Espanyol, who have twice been runners-up in the UEFA Cup (now UEFA Europa League). Barcelona currently play in La Liga, while Espanyol play in the Segunda División.
Catalan water polo is one of the leading powers of the Iberian Peninsula. Catalans have won water polo titles at European and world level both with clubs (Barcelona was European champion in 1981–82, and Catalunya in 1994–95) and with the national team (one gold and one silver at the Olympic Games and World Championships). Catalonia has also produced many international synchronized swimming champions.
Motorsport has a long tradition in Catalonia, involving many people and producing several world champions, with competitions organized since the beginning of the 20th century. The Circuit de Catalunya, built in 1991, is one of the main motorsport venues, hosting the Catalan motorcycle Grand Prix, the Spanish F1 Grand Prix, a DTM race, and several other races.
Catalonia has hosted many major international sporting events, such as the 1992 Summer Olympics in Barcelona, as well as the 1955 Mediterranean Games, the 2013 World Aquatics Championships and the 2018 Mediterranean Games. It hosts annually the Volta a Catalunya (Tour of Catalonia), the fourth-oldest still-existing cycling stage race in the world.
Catalonia has its own representative and distinctive national symbols, such as the flag (the Senyera), the national day (the Diada, held on 11 September) and the national anthem (Els Segadors).
Catalan gastronomy has a long culinary tradition, with local recipes described in documents dating from the fifteenth century. As with all the cuisines of the Mediterranean, Catalan dishes make abundant use of fish, seafood, olive oil, bread and vegetables. Regional specialties include pa amb tomàquet (bread with tomato), which consists of bread (sometimes toasted) rubbed with tomato and seasoned with olive oil and salt. It is often accompanied by sausages (cured botifarres, fuet, Iberian ham, etc.), ham or cheeses. Other dishes include the calçotada, escudella i carn d'olla, suquet de peix (fish stew), and a dessert, Catalan cream.
Catalan vineyards also produce wines under several Denominacions d'Origen, such as Priorat, Montsant, Penedès and Empordà. There is also a sparkling wine, cava.
Catalonia is internationally recognized for its fine dining. Three of the World's 50 Best Restaurants are in Catalonia, and four restaurants have three Michelin stars, including restaurants like El Bulli or El Celler de Can Roca, both of which regularly dominate international rankings of restaurants. The region has been awarded the European Region of Gastronomy title for the year 2016.
This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). "Catalonia". Encyclopædia Britannica (11th ed.). Cambridge University Press. | [
{
"paragraph_id": 0,
"text": "Catalonia (/ˌkætəˈloʊniə/; Catalan: Catalunya [kətəˈluɲə]; Spanish: Cataluña [kataˈluɲa] ; Occitan: Catalonha [kataˈluɲa]) is an autonomous community of Spain, designated as a nationality by its Statute of Autonomy.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Most of its territory (except the Val d'Aran) lies on the northeast of the Iberian Peninsula, to the south of the Pyrenees mountain range. Catalonia is administratively divided into four provinces: Barcelona, Girona, Lleida, and Tarragona. The capital and largest city, Barcelona, is the second-most populated municipality in Spain and the fifth-most populous urban area in the European Union. Modern-day Catalonia comprises most of the medieval and early modern Principality of Catalonia (with the remainder Roussillon now part of France's Pyrénées-Orientales). It is bordered by France (Occitanie) and Andorra to the north, the Mediterranean Sea to the east, and the Spanish autonomous communities of Aragon to the west and Valencia to the south. The official languages are Catalan, Spanish, and the Aranese dialect of Occitan.",
"title": ""
},
{
"paragraph_id": 2,
"text": "In the late 8th century, various counties across the eastern Pyrenees were established by the Frankish kingdom as a defensive barrier against Muslim invasions. In the 10th century, the County of Barcelona became progressively independent. In 1137, Barcelona and the Kingdom of Aragon were united by marriage under the Crown of Aragon. Within the Crown, the Catalan counties adopted a common polity, the Principality of Catalonia, developing its institutional system, such as Courts, Generalitat and constitutions, becoming the base for the Crown's Mediterranean trade and expansionism. In the later Middle Ages, Catalan literature flourished. In 1469, the king of Aragon and the queen of Castile were married and ruled their realms together, retaining all of their distinct institutions and legislation.",
"title": ""
},
{
"paragraph_id": 3,
"text": "During the Franco-Spanish War (1635–1659), Catalonia revolted (1640–1652) against a large and burdensome presence of the royal army, being briefly proclaimed a republic under French protection until it was largely reconquered by the Spanish army. By the Treaty of the Pyrenees (1659), the northern parts of Catalonia, mostly the Roussillon, were ceded to France. During the War of the Spanish Succession (1701–1714), the Crown of Aragon sided against the Bourbon Philip V of Spain, but the Catalans were defeated with the fall of Barcelona on 11 September 1714. Philip V subsequently imposed a unifying administration across Spain, enacting the Nueva Planta decrees which, like in the other realms of the Crown of Aragon, suppressed Catalan institutions and rights. As a consequence, Catalan as a language of government and literature was eclipsed by Spanish. Throughout the 18th century, Catalonia experienced economic growth.",
"title": ""
},
{
"paragraph_id": 4,
"text": "In the 19th century, Catalonia was severely affected by the Napoleonic and Carlist Wars. In the second third of the century, it experienced industrialisation. As wealth from the industrial expansion grew, it saw a cultural renaissance coupled with incipient nationalism while several workers' movements appeared. With the establishment of the Second Spanish Republic (1931–1939), the Generalitat was restored as a Catalan autonomous government. After the Spanish Civil War, the Francoist dictatorship enacted repressive measures, abolishing Catalan self-government and banning the official use of the Catalan language. After a period of autarky, from the late 1950s through to the 1970s Catalonia saw rapid economic growth, drawing many workers from across Spain, making Barcelona one of Europe's largest industrial metropolitan areas and turning Catalonia into a major tourist destination. During the Spanish transition to democracy (1975–1982), Catalonia regained self-government and is now one of the most economically dynamic communities in Spain.",
"title": ""
},
{
"paragraph_id": 5,
"text": "Since the 2010s, there has been growing support for Catalan independence. On 27 October 2017, the Catalan Parliament unilaterally declared independence following a referendum that was deemed unconstitutional by the Spanish state. The Spanish Senate voted in favour of enforcing direct rule by removing the Catalan government and calling a snap regional election. The Spanish Supreme Court imprisoned seven former ministers of the Catalan government on charges of rebellion and misuse of public funds, while several others—including then-President Carles Puigdemont—fled to other European countries. Those in prison were pardoned by the Spanish government in 2021.",
"title": ""
},
{
"paragraph_id": 6,
"text": "The name \"Catalonia\" (Medieval Latin: Cathalaunia; Catalan: Catalunya), spelled Cathalonia, began to be used for the homeland of the Catalans (Cathalanenses) in the late 11th century and was probably used before as a territorial reference to the group of counties that comprised part of the March of Gothia and the March of Hispania under the control of the Count of Barcelona and his relatives. The origin of the name Catalunya is subject to diverse interpretations because of a lack of evidence.",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 7,
"text": "One theory suggests that Catalunya derives from the name Gothia (or Gauthia) Launia (\"Land of the Goths\"), since the origins of the Catalan counts, lords and people were found in the March of Gothia, known as Gothia, whence Gothland > Gothlandia > Gothalania > Cathalaunia > Catalonia theoretically derived. During the Middle Ages, Byzantine chroniclers claimed that Catalania derives from the local medley of Goths with Alans, initially constituting a Goth-Alania.",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 8,
"text": "Other theories suggest:",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 9,
"text": "In English, Catalonia is pronounced /kætəˈloʊniə/. The native name, Catalunya, is pronounced [kətəˈluɲə] in Central Catalan, the most widely spoken variety, and [kataˈluɲa] in North-Western Catalan. The Spanish name is Cataluña ([kataˈluɲa]), and the Aranese name is Catalonha ([kataˈluɲa]).",
"title": "Etymology and pronunciation"
},
{
"paragraph_id": 10,
"text": "The first known human settlements in what is now Catalonia were at the beginning of the Middle Paleolithic. The oldest known trace of human occupation is a mandible found in Banyoles, described by some sources as pre-Neanderthal, that is, some 200,000 years old; other sources suggest it to be only about one third that old. From the next prehistoric era, the Epipalaeolithic or Mesolithic, important remains survive, the greater part dated between 8000 and 5000 BC, such as those of Sant Gregori (Falset) and el Filador (Margalef de Montsant). The most important sites from these eras, all excavated in the region of Moianès, are the Balma del Gai (Epipaleolithic) and the Balma de l'Espluga (late Epipaleolithic and Early Neolithic).",
"title": "History"
},
{
"paragraph_id": 11,
"text": "The Neolithic era began in Catalonia around 5000 BC, although the population was slower to develop fixed settlements than in other places, thanks to the abundance of woods, which allowed the continuation of a fundamentally hunter-gatherer culture. An example of such settlements would be La Draga at Banyoles, an \"early Neolithic village which dates from the end of the 6th millennium BC.\"",
"title": "History"
},
{
"paragraph_id": 12,
"text": "The Chalcolithic period developed in Catalonia between 2500 and 1800 BC, with the beginning of the construction of copper objects. The Bronze Age occurred between 1800 and 700 BC. There are few remnants of this era, but there were some known settlements in the low Segre zone. The Bronze Age coincided with the arrival of the Indo-Europeans through the Urnfield Culture, whose successive waves of migration began around 1200 BC, and they were responsible for the creation of the first proto-urban settlements. Around the middle of the 7th century BC, the Iron Age arrived in Catalonia.",
"title": "History"
},
{
"paragraph_id": 13,
"text": "In pre-Roman times, the area that is now called Catalonia in the north-east of Iberian Peninsula – like the rest of the Mediterranean side of the peninsula – was populated by the Iberians. The Iberians of this area – the Ilergetes, Indigetes and Lacetani (Cerretains) – also maintained relations with the peoples of the Mediterranean. Some urban agglomerations became relevant, including Ilerda (Lleida) inland, Hibera (perhaps Amposta or Tortosa) or Indika (Ullastret). Coastal trading colonies were established by the ancient Greeks, who settled around the Gulf of Roses, in Emporion (Empúries) and Roses in the 8th century BC. The Carthaginians briefly ruled the territory in the course of the Second Punic War and traded with the surrounding Iberian population.",
"title": "History"
},
{
"paragraph_id": 14,
"text": "After the Carthaginian defeat by the Roman Republic, the north-east of Iberia became the first to come under Roman rule and became part of Hispania, the westernmost part of the Roman Empire. Tarraco (modern Tarragona) was one of the most important Roman cities in Hispania and the capital of the province of Tarraconensis. Other important cities of the Roman period are Ilerda (Lleida), Dertosa (Tortosa), Gerunda (Girona) as well as the ports of Empuriæ (former Emporion) and Barcino (Barcelona). As for the rest of Hispania, Latin law was granted to all cities under the reign of Vespasian (69–79 AD), while Roman citizenship was granted to all free men of the empire by the Edict of Caracalla in 212 AD (Tarraco, the capital, was already a colony of Roman law since 45 BC). It was a rich agricultural province (olive oil, wine, wheat), and the first centuries of the Empire saw the construction of roads (the most important being the Via Augusta, parallel to Mediterranean coastline) and infrastructure like aqueducts.",
"title": "History"
},
{
"paragraph_id": 15,
"text": "Conversion to Christianity, attested in the 3rd century, was completed in urban areas in the 4th century. Although Hispania remained under Roman rule and did not fall under the rule of Vandals, Suebi and Alans in the 5th century, the main cities suffered frequent sacking and some deurbanization.",
"title": "History"
},
{
"paragraph_id": 16,
"text": "After the fall of the Western Roman Empire, the area was conquered by the Visigoths and was ruled as part of the Visigothic Kingdom for almost two and a half centuries. In 718, it came under Muslim control and became part of Al-Andalus, a province of the Umayyad Caliphate. From the conquest of Roussillon in 760, to the conquest of Barcelona in 801, the Frankish empire took control of the area between Septimania and the Llobregat river from the Muslims and created heavily militarised, self-governing counties. These counties formed part of the historiographically known as the Gothic and Hispanic Marches, a buffer zone in the south of the Frankish empire in the former province of Septimania and in the northeast of the Iberian Peninsula, to act as a defensive barrier for the Frankish Empire against further Muslim invasions from Al-Andalus.",
"title": "History"
},
{
"paragraph_id": 17,
"text": "These counties came under the rule of the counts of Barcelona, who were Frankish vassals nominated by the emperor of the Franks, to whom they were feudatories (801–988). The earliest known use of the name \"Catalonia\" for these counties dates to 1117. At the end of the 9th century, the Count of Barcelona Wilfred the Hairy (878–897) made his titles hereditaries and thus founded the dynasty of the House of Barcelona, which ruled Catalonia until 1410.",
"title": "History"
},
{
"paragraph_id": 18,
"text": "In 988 Borrell II, Count of Barcelona, did not recognise the new French king Hugh Capet as his king, evidencing the loss of dependency from Frankish rule and confirming his successors (from Ramon Borrell I onwards) as independent of the Capetian crown whom they regarded as usurpers of the Carolingian Frankish realm. At the beginning of eleventh century the Catalan counties suffered an important process of feudalisation, partially controlled by the efforts of church's sponsored Peace and Truce Assemblies and the negotiation skills of the Count of Barcelona Ramon Berenguer I (1035–1076), which began the codification of feudal law in the written Usages of Barcelona, becoming the basis of the Catalan law. In 1137, Ramon Berenguer IV, Count of Barcelona decided to accept King Ramiro II of Aragon's proposal to marry Queen Petronila, establishing the dynastic union of the County of Barcelona with the Kingdom of Aragon, creating the composite monarchy known as the Crown of Aragon and making the Catalan counties that were united under the County of Barcelona into a principality of the Aragonese Crown.",
"title": "History"
},
{
"paragraph_id": 19,
"text": "In 1258, by means of the Treaty of Corbeil, James I of Aragon King of Aragon and Count of Barcelona, king of Mallorca and of Valencia, renounced his family rights and dominions in Occitania and recognised the king of France as heir of the Carolingian dynasty. The king of France, Louis IX, formally relinquished his claims of feudal lordship over all the Catalan counties, except the County of Foix, despite the opposition of king James. This treaty confirmed, from French point of view, the independence of the Catalan counties established and exercised during the previous three centuries, but also meant the irremediable separation between the geographical areas of Catalonia and Languedoc.",
"title": "History"
},
{
"paragraph_id": 20,
"text": "As a coastal territory, Catalonia became the base of the Aragonese Crown's maritime forces, which spread the power of the Crown in the Mediterranean, turning Barcelona into a powerful and wealthy city. In the period of 1164–1410, new territories, the Kingdom of Valencia, the Kingdom of Majorca, the Kingdom of Sardinia, the Kingdom of Sicily, and, briefly, the Duchies of Athens and Neopatras, were incorporated into the dynastic domains of the House of Aragon. The expansion was accompanied by a great development of the Catalan trade, creating an extensive trade network across the Mediterranean which competed with those of the maritime republics of Genoa and Venice.",
"title": "History"
},
{
"paragraph_id": 21,
"text": "At the same time, the Principality of Catalonia developed a complex institutional and political system based in the concept of a pact between the estates of the realm and the king. Laws had to be approved in the Catalan Courts (Corts Catalanes), one of the first parliamentary bodies of Europe that, since 1283, obtained the power to create legislation with the monarch. The Courts were composed of the three Estates organized into \"arms\" (braços), were presided over by the monarch, and approved the Catalan constitutions, which established a compilation of rights for the inhabitants of the Principality. In order to collect general taxes, the Courts of 1359 established a permanent representative of deputies position, called the Deputation of the General (and later usually known as Generalitat), which gained considerable political power over the next centuries.",
"title": "History"
},
{
"paragraph_id": 22,
"text": "The domains of the Aragonese Crown were severely affected by the Black Death pandemic and by later outbreaks of the plague. Between 1347 and 1497 Catalonia lost 37 percent of its population. In 1410, the last reigning monarch of the House of Barcelona, King Martin I died without surviving descendants. Under the Compromise of Caspe (1412), Ferdinand from the Castilian House of Trastámara received the Crown of Aragon as Ferdinand I of Aragon. During the reign of his son, John II, social and political tensions caused the Catalan Civil War (1462–1472) and the War of the Remences (1462-1486). The Sentencia Arbitral de Guadalupe (1486) liberated the remença peasants from the feudal evil customs.",
"title": "History"
},
{
"paragraph_id": 23,
"text": "In the later Middle Ages, Catalan literature flourished in Catalonia proper and in the kingdoms of Majorca and Valencia, with such remarkable authors as the philosopher Ramon Llull, the Valencian poet Ausiàs March, and Joanot Martorell, author of the novel Tirant lo Blanch, published in 1490.",
"title": "History"
},
{
"paragraph_id": 24,
"text": "Ferdinand II of Aragon, the grandson of Ferdinand I, and Queen Isabella I of Castile were married in 1469, later taking the title the Catholic Monarchs; subsequently, this event was seen by historiographers as the dawn of a unified Spain. At this time, though united by marriage, the Crowns of Castile and Aragon maintained distinct territories, each keeping its own traditional institutions, parliaments, laws and currency. Castile commissioned expeditions to the Americas and benefited from the riches acquired in the Spanish colonisation of the Americas, but, in time, also carried the main burden of military expenses of the united Spanish kingdoms. After Isabella's death, Ferdinand II personally ruled both crowns.",
"title": "History"
},
{
"paragraph_id": 25,
"text": "By virtue of descent from his maternal grandparents, Ferdinand II of Aragon and Isabella I of Castile, in 1516 Charles I of Spain became the first king to rule the Crowns of Castile and Aragon simultaneously by his own right. Following the death of his paternal (House of Habsburg) grandfather, Maximilian I, Holy Roman Emperor, he was also elected Charles V, Holy Roman Emperor, in 1519.",
"title": "History"
},
{
"paragraph_id": 26,
"text": "Over the next few centuries, the Principality of Catalonia was generally on the losing side of a series of wars that led steadily to an increased centralization of power in Spain. Despite this fact, between the 16th and 18th centuries, the participation of the political community in the local and the general Catalan government grew (thus consolidating its constitutional system), while the kings remained absent, represented by a viceroy. Tensions between Catalan institutions and the monarchy began to arise. The large and burdensome presence of the Spanish royal army in the Principality due to the Franco-Spanish War led to an uprising of peasants, provoking the Reapers' War (1640–1652), which saw Catalonia rebel (briefly as a republic led by the chairman of the Generalitat, Pau Claris) with French help against the Spanish Crown for overstepping Catalonia's rights during the Thirty Years' War. Within a brief period France took full control of Catalonia. Most of Catalonia was reconquered by the Spanish monarchy but Catalan rights were recognised. Roussillon and half of Cerdanya was lost to France by the Treaty of the Pyrenees (1659).",
"title": "History"
},
{
"paragraph_id": 27,
"text": "The most significant conflict concerning the governing monarchy was the War of the Spanish Succession (1701–1715), which began when the childless Charles II of Spain, the last Spanish Habsburg, died without an heir in 1700. Charles II had chosen Philip V of Spain from the French House of Bourbon. Catalonia, like other territories that formed the Crown of Aragon, rose up in support of the Austrian Habsburg pretender Charles VI, Holy Roman Emperor, in his claim for the Spanish throne as Charles III of Spain. The fight between the houses of Bourbon and Habsburg for the Spanish Crown split Spain and Europe.",
"title": "History"
},
{
"paragraph_id": 28,
"text": "The fall of Barcelona on 11 September 1714 to the Bourbon king Philip V militarily ended the Habsburg claim to the Spanish Crown, which became legal fact in the Treaty of Utrecht. Philip felt that he had been betrayed by the Catalan Courts, as it had initially sworn its loyalty to him when he had presided over it in 1701. In retaliation for the betrayal, and inspired by the French absolutist style of government, the first Bourbon king introduced the Nueva Planta decrees, that incorporated the realms of the Crown of Aragon, including the Principality of Catalonia, as province of the Crown of Castile in 1716, terminating their separate institutions, laws and rights, as well as their pactist politics, within a united kingdom of Spain. From the second third of 18th century onwards Catalonia carried out a successful process of proto-industrialization, reinforced in the late quarter of the century when Castile's trade monopoly with American colonies ended.",
"title": "History"
},
{
"paragraph_id": 29,
"text": "After the War of the Spanish Succession, the assimilation of the Crown of Aragon by the Castilian Crown through the Nueva Planta Decrees, was the first step in the creation of the Spanish nation state. And like other European nation-states in formation, it was not on a uniform ethnic basis, but by imposing the political and cultural characteristics of the capital, in this case Madrid and Central Spain, on those of the other areas, whose inhabitants would become national minorities to be assimilated through nationalist policies. These nationalist policies, sometimes very aggressive, and still in force, have been and are the seed of repeated territorial conflicts within the state.",
"title": "History"
},
{
"paragraph_id": 30,
"text": "At the beginning of the nineteenth century, Catalonia was severely affected by the Napoleonic Wars. In 1808, it was occupied by French troops; the resistance against the occupation eventually developed into the Peninsular War. The rejection of French dominion was institutionalized with the creation of \"juntas\" (councils) who, remaining loyal to the Bourbons, exercised the sovereignty and representation of the territory due to the disappearance of the old institutions. Napoleon took direct control of Catalonia to reestablish order, creating the Government of Catalonia under the rule of Marshall Augereau, and making Catalan briefly an official language again. Between 1812 and 1814, Catalonia was annexed to France and organized as four departments. The French troops evacuated Catalan territory at the end of 1814. After the Bourbon restoration in Spain and the death of the absolutist king Ferdinand VII (1833), Carlist Wars erupted against the newly established liberal state of Isabella II. Catalonia was divided, with the coastal and most industrialized areas supporting liberalism, while many inland areas were in the hands of the Carlist faction; the latter proposed to reestablish the institutional systems suppressed by the Nueva Planta decrees in the ancient realms of the Crown of Aragon. The consolidation of the liberal state saw a new territorial division of Spain into provinces, including Catalonia, which was divided into four (Barcelona, Girona, Lleida and Tarragona).",
"title": "History"
},
{
"paragraph_id": 31,
"text": "In the second third of the 19th century, Catalonia became an important industrial center, particularly focused on textiles. This process was a consequence of the conditions of proto-industrialisation of textile production in the prior two centuries, growing capital from wine and brandy export, and was boosted by the government support for domestic manufacturing. In 1832, the Bonaplata Factory in Barcelona became the first factory in the country to make use of the steam engine. The first railway on the Iberian Peninsula was built between Barcelona and Mataró in 1848. A policy to encourage company towns also saw the textile industry flourish in the countryside in the 1860s and 1870s. Although the policy of Spanish governments oscillated between free trade and protectionism, protectionist laws become more common. To this day Catalonia remains one of the most industrialised areas of Spain.",
"title": "History"
},
{
"paragraph_id": 32,
"text": "In the same period, Barcelona was the focus of industrial conflict and revolutionary uprisings known as \"bullangues\". In Catalonia, a republican current began to develop and inevitably, many Catalans favored the federalisation of Spain. Meanwhile, the Catalan language saw a cultural renaissance from the second third of the century onwards, the Renaixença, among both the working class and the bourgeoisie. Right after the fall of the First Spanish Republic (1873–1874) and the subsequent restoration of the Bourbon dynasty (1874), Catalan nationalism began to be organized politically under the leadership of the republican federalist Valentí Almirall.",
"title": "History"
},
{
"paragraph_id": 33,
"text": "The anarchist movement had been active throughout the last quarter of the 19th century and the early 20th century, founding the CNT trade union in 1910 and achieving one of the first eight-hour workdays in Europe in 1919. Growing resentment of conscription and of the military culminated in the Tragic Week in Barcelona in 1909. Under the hegemony of the Regionalist League, Catalonia gained a degree of administrative unity for the first time in the Modern era. In 1914, the four Catalan provinces were authorized to create a commonwealth (Catalan: Mancomunitat de Catalunya), without any legislative power or specific political autonomy, which carried out an ambitious program of modernization, but it was disbanded in 1925 by the dictatorship of Primo de Rivera (1923–1930). During the final stage of the Dictatorship, with Spain beginning to suffer an economic crisis, Barcelona hosted the 1929 International Exposition.",
"title": "History"
},
{
"paragraph_id": 34,
"text": "After the fall of the dictatorship and a brief proclamation of the Catalan Republic, during the events of the proclamation of the Second Spanish Republic (14–17 April 1931), Catalonia received in 1932, its first Statute of Autonomy from the Spanish Republic's Parliament, granting it a considerable degree of self-government, establishing an autonomous body, the Generalitat of Catalonia, which included a parliament, an executive council and a court of appeal. The left-wing independentist leader Francesc Macià was appointed its first president. Under the Statute, Catalan became an official language. The governments of the Republican Generalitat, led by the Republican Left of Catalonia (ERC) members Francesc Macià (1931–1933) and Lluís Companys (1933–1940), sought to implement an advanced and progressive social agenda, despite the internal difficulties. This period was marked by political unrest, the effects of the economic crisis and their social repercussions. The Statute of Autonomy was suspended in 1934, due to the Events of 6 October in Barcelona, as a response to the accession of right-wing Spanish nationalist party CEDA to the government of the Republic, considered close to fascism. After the electoral victory of the left wing Popular Front in February 1936, the Government of Catalonia was pardoned and the self-government was restored.",
"title": "History"
},
{
"paragraph_id": 35,
"text": "The defeat of the military rebellion against the Republican government in Barcelona placed Catalonia firmly in the Republican side of the Spanish Civil War. During the war, there were two rival powers in Catalonia: the de jure power of the Generalitat and the de facto power of the armed popular militias. Violent confrontations between the workers' parties (CNT-FAI and POUM against the PSUC) culminated in the defeat of the first ones in 1937. The situation resolved itself progressively in favor of the Generalitat, but at the same time the Generalitat was partially losing its autonomous power within Republican Spain. In 1938 Franco's troops broke the Republican territory in two, isolating Catalonia from the rest of the Republic. The defeat of the Republican army in the Battle of the Ebro led in 1938 and 1939 to the occupation of Catalonia by Franco's forces.",
"title": "History"
},
{
"paragraph_id": 36,
"text": "The defeat of the Spanish Republic in the Spanish Civil War brought to power the dictatorship of Francisco Franco, whose first ten-year rule was particularly violent, autocratic, and repressive both in a political, cultural, social, and economical sense. In Catalonia, any kind of public activities associated with Catalan nationalism, republicanism, anarchism, socialism, liberalism, democracy or communism, including the publication of books on those subjects or simply discussion of them in open meetings, was banned.",
"title": "History"
},
{
"paragraph_id": 37,
"text": "Franco's regime banned the use of Catalan in government-run institutions and during public events, and also the Catalan institutions of self-government were abolished. The pro-Republic of Spain president of Catalonia, Lluís Companys, was taken to Spain from his exile in the German-occupied France and was tortured and executed in the Montjuïc Castle of Barcelona for the crime of 'military rebellion'.",
"title": "History"
},
{
"paragraph_id": 38,
"text": "During later stages of Francoist Spain, certain folkloric and religious celebrations in Catalan resumed and were tolerated. Use of Catalan in the mass media had been forbidden but was permitted from the early 1950s in the theatre. Despite the ban during the first years and the difficulties of the next period, publishing in Catalan continued throughout his rule.",
"title": "History"
},
{
"paragraph_id": 39,
"text": "The years after the war were extremely hard. Catalonia, like many other parts of Spain, had been devastated by the war. Recovery from the war damage was slow and made more difficult by the international trade embargo and the autarkic politics of Franco's regime. By the late 1950s, the region had recovered its pre-war economic levels and in the 1960s was the second-fastest growing economy in the world in what became known as the Spanish miracle. During this period there was a spectacular growth of industry and tourism in Catalonia that drew large numbers of workers to the region from across Spain and made the area around Barcelona one of Europe's largest industrial metropolitan areas.",
"title": "History"
},
{
"paragraph_id": 40,
"text": "After Franco's death in 1975, Catalonia voted for the adoption of a democratic Spanish Constitution in 1978, in which Catalonia recovered political and cultural autonomy, restoring the Generalitat (exiled since the end of the Civil War in 1939) in 1977 and adopting a new Statute of Autonomy in 1979, which defined Catalonia as a \"nationality\". The first elections to the Parliament of Catalonia under this Statute gave the Catalan presidency to Jordi Pujol, leader of Convergència i Unió (CiU), a center-right Catalan nationalist electoral coalition, with Pujol re-elected until 2003. Throughout the 1980s and 1990s, the institutions of Catalan autonomy were deployed, among them an autonomous police force, the Mossos d'Esquadra, in 1983, and the broadcasting network Televisió de Catalunya and its first channel TV3, created in 1983. An extensive program of normalization of Catalan language was carried out. Today, Catalonia remains one of the most economically dynamic communities of Spain. The Catalan capital and largest city, Barcelona, is a major international cultural centre and a major tourist destination. In 1992, Barcelona hosted the Summer Olympic Games.",
"title": "History"
},
{
"paragraph_id": 41,
"text": "In November 2003, elections to the Parliament of Catalonia gave the government to a left-wing Catalanist coalition formed by the Socialists' Party of Catalonia (PSC-PSOE), Republican Left of Catalonia (ERC) and Initiative for Catalonia Greens (ICV), and the socialist Pasqual Maragall was appointed president. The new government redacted a new version of the Statute of Autonomy, with the aim of consolidate and expand certain aspects of self-government.",
"title": "History"
},
{
"paragraph_id": 42,
"text": "The new Statute of Autonomy of Catalonia, approved after a referendum in 2006, was contested by important sectors of the Spanish society, especially by the conservative People's Party, which sent the law to the Constitutional Court of Spain. In 2010, the Court declared non-valid some of the articles that established an autonomous Catalan system of Justice, improved aspects of the financing, a new territorial division, the status of Catalan language or the symbolical declaration of Catalonia as a nation. This decision was severely contested by large sectors of Catalan society, which increased the demands of independence.",
"title": "History"
},
{
"paragraph_id": 43,
"text": "A controversial independence referendum was held in Catalonia on 1 October 2017, using a disputed voting process. It was declared illegal and suspended by the Constitutional Court of Spain, because it breached the 1978 Constitution. Subsequent developments saw, on 27 October 2017, a symbolic declaration of independence by the Parliament of Catalonia, the enforcement of direct rule by the Spanish government through the use of Article 155 of the Constitution, the dismissal of the Executive Council and the dissolution of the Parliament, with a snap regional election called for 21 December 2017, which ended with a victory of pro-independence parties. Former President Carles Puigdemont and five former cabinet ministers fled Spain and took refuge in other European countries (such as Belgium, in Puigdemont's case), whereas nine other cabinet members, including vice-president Oriol Junqueras, were sentenced to prison under various charges of rebellion, sedition, and misuse of public funds. Quim Torra became the 131st President of the Government of Catalonia on 17 May 2018, after the Spanish courts blocked three other candidates.",
"title": "History"
},
{
"paragraph_id": 44,
"text": "In 2018, the Assemblea Nacional Catalana joined the Unrepresented Nations and Peoples Organization (UNPO) on behalf of Catalonia.",
"title": "History"
},
{
"paragraph_id": 45,
"text": "On 14 October 2019, the Spanish Supreme court sentenced several Catalan political leaders, involved in organizing a referendum on Catalonia's independence from Spain, and convicted them on charges ranging from sedition to misuse of public funds, with sentences ranging from 9 to 13 years in prison. This decision sparked demonstrations around Catalonia. They were later pardoned by the Spanish government and left prison in June 2021.",
"title": "History"
},
{
"paragraph_id": 46,
"text": "The climate of Catalonia is diverse. The populated areas lying by the coast in Tarragona, Barcelona and Girona provinces feature a Hot-summer Mediterranean climate (Köppen Csa). The inland part (including the Lleida province and the inner part of Barcelona province) show a mostly Mediterranean climate (Köppen Csa). The Pyrenean peaks have a continental (Köppen D) or even Alpine climate (Köppen ET) at the highest summits, while the valleys have a maritime or oceanic climate sub-type (Köppen Cfb).",
"title": "Geography"
},
{
"paragraph_id": 47,
"text": "In the Mediterranean area, summers are dry and hot with sea breezes, and the maximum temperature is around 26–31 °C (79–88 °F). Winter is cool or slightly cold depending on the location. It snows frequently in the Pyrenees, and it occasionally snows at lower altitudes, even by the coastline. Spring and autumn are typically the rainiest seasons, except for the Pyrenean valleys, where summer is typically stormy.",
"title": "Geography"
},
{
"paragraph_id": 48,
"text": "The inland part of Catalonia is hotter and drier in summer. Temperature may reach 35 °C (95 °F), some days even 40 °C (104 °F). Nights are cooler there than at the coast, with the temperature of around 14–17 °C (57–63 °F). Fog is not uncommon in valleys and plains; it can be especially persistent, with freezing drizzle episodes and subzero temperatures during winter, mainly along the Ebro and Segre valleys and in Plain of Vic.",
"title": "Geography"
},
{
"paragraph_id": 49,
"text": "Catalonia has a marked geographical diversity, considering the relatively small size of its territory. The geography is conditioned by the Mediterranean coast, with 580 kilometres (360 miles) of coastline, and the towering Pyrenees along the long northern border. Catalonia is divided into three main geomorphological units:",
"title": "Geography"
},
{
"paragraph_id": 50,
"text": "The Catalan Pyrenees represent almost half in length of the Pyrenees, as it extends more than 200 kilometres (120 miles). Traditionally differentiated the Axial Pyrenees (the main part) and the Pre-Pyrenees (southern from the Axial) which are mountainous formations parallel to the main mountain ranges but with lower altitudes, less steep and a different geological formation. The highest mountain of Catalonia, located north of the comarca of Pallars Sobirà is the Pica d'Estats (3,143 m), followed by the Puigpedrós (2,914 m). The Serra del Cadí comprises the highest peaks in the Pre-Pyrenees and forms the southern boundary of the Cerdanya valley.",
"title": "Geography"
},
{
"paragraph_id": 51,
"text": "The Central Catalan Depression is a plain located between the Pyrenees and Pre-Coastal Mountains. Elevation ranges from 200 to 600 metres (660 to 1,970 feet). The plains and the water that descend from the Pyrenees have made it fertile territory for agriculture and numerous irrigation canals have been built. Another major plain is the Empordà, located in the northeast.",
"title": "Geography"
},
{
"paragraph_id": 52,
"text": "The Catalan Mediterranean system is based on two ranges running roughly parallel to the coast (southwest–northeast), called the Coastal and the Pre-Coastal Ranges. The Coastal Range is both the shorter and the lower of the two, while the Pre-Coastal is greater in both length and elevation. Areas within the Pre-Coastal Range include Montserrat, Montseny and the Ports de Tortosa-Beseit. Lowlands alternate with the Coastal and Pre-Coastal Ranges. The Coastal Lowland is located to the East of the Coastal Range between it and the coast, while the Pre-Coastal Lowlands are located inland, between the Coastal and Pre-Coastal Ranges, and includes the Vallès and Penedès plains.",
"title": "Geography"
},
{
"paragraph_id": 53,
"text": "Catalonia is a showcase of European landscapes on a small scale. Just over 30,000 square kilometres (12,000 square miles) hosting a variety of substrates, soils, climates, directions, altitudes and distances to the sea. The area is of great ecological diversity and a remarkable wealth of landscapes, habitats and species.",
"title": "Geography"
},
{
"paragraph_id": 54,
"text": "The fauna of Catalonia comprises a minority of animals endemic to the region and a majority of non-endemic animals. Much of Catalonia enjoys a Mediterranean climate (except mountain areas), which makes many of the animals that live there adapted to Mediterranean ecosystems. Of mammals, there are plentiful wild boar, red foxes, as well as roe deer and in the Pyrenees, the Pyrenean chamois. Other large species such as the bear have been recently reintroduced.",
"title": "Geography"
},
{
"paragraph_id": 55,
"text": "The waters of the Balearic Sea are rich in biodiversity, and even the megafaunas of the oceans; various types of whales (such as fin, sperm, and pilot) and dolphins can be found in the area.",
"title": "Geography"
},
{
"paragraph_id": 56,
"text": "Most of Catalonia belongs to the Mediterranean Basin. The Catalan hydrographic network consists of two important basins, the one of the Ebro and the one that comprises the internal basins of Catalonia (respectively covering 46.84% and 51.43% of the territory), all of them flow to the Mediterranean. Furthermore, there is the Garona river basin that flows to the Atlantic Ocean, but it only covers 1.73% of the Catalan territory.",
"title": "Geography"
},
{
"paragraph_id": 57,
"text": "The hydrographic network can be divided in two sectors, an occidental slope or Ebro river slope and one oriental slope constituted by minor rivers that flow to the Mediterranean along the Catalan coast. The first slope provides an average of 18,700 cubic hectometres (4.5 cubic miles) per year, while the second only provides an average of 2,020 hm (0.48 cu mi)/year. The difference is due to the big contribution of the Ebro river, from which the Segre is an important tributary. Moreover, in Catalonia there is a relative wealth of groundwaters, although there is inequality between comarques, given the complex geological structure of the territory. In the Pyrenees there are many small lakes, remnants of the ice age. The biggest are the lake of Banyoles and the recently recovered lake of Ivars.",
"title": "Geography"
},
{
"paragraph_id": 58,
"text": "The Catalan coast is almost rectilinear, with a length of 580 kilometres (360 mi) and few landforms—the most relevant are the Cap de Creus and the Gulf of Roses to the north and the Ebro Delta to the south. The Catalan Coastal Range hugs the coastline, and it is split into two segments, one between L'Estartit and the town of Blanes (the Costa Brava), and the other at the south, at the Costes del Garraf.",
"title": "Geography"
},
{
"paragraph_id": 59,
"text": "The principal rivers in Catalonia are the Ter, Llobregat, and the Ebro (Catalan: Ebre), all of which run into the Mediterranean.",
"title": "Geography"
},
{
"paragraph_id": 60,
"text": "The majority of Catalan population is concentrated in 30% of the territory, mainly in the coastal plains. Intensive agriculture, livestock farming and industrial activities have been accompanied by a massive tourist influx (more than 20 million annual visitors), a rate of urbanization and even of major metropolisation which has led to a strong urban sprawl: two thirds of Catalans live in the urban area of Barcelona, while the proportion of urban land increased from 4.2% in 1993 to 6.2% in 2009, a growth of 48.6% in sixteen years, complemented with a dense network of transport infrastructure. This is accompanied by a certain agricultural abandonment (decrease of 15% of all areas cultivated in Catalonia between 1993 and 2009) and a global threat to natural environment. Human activities have also put some animal species at risk, or even led to their disappearance from the territory, like the gray wolf and probably the brown bear of the Pyrenees. The pressure created by this model of life means that the country's ecological footprint exceeds its administrative area.",
"title": "Geography"
},
{
"paragraph_id": 61,
"text": "Faced with these problems, Catalan authorities initiated several measures whose purpose is to protect natural ecosystems. Thus, in 1990, the Catalan government created the Nature Conservation Council (Catalan: Consell de Protecció de la Natura), an advisory body with the aim to study, protect and manage the natural environments and landscapes of Catalonia. In addition, the Generalitat has carried out the Plan of Spaces of Natural Interest (Pla d'Espais d'Interès Natural or PEIN) in 1992 while eighteen Natural Spaces of Special Protection (Espais Naturals de Protecció Especial or ENPE) have been instituted.",
"title": "Geography"
},
{
"paragraph_id": 62,
"text": "There's a National Park, Aigüestortes i Estany de Sant Maurici; fourteen Natural Parks, Alt Pirineu, Aiguamolls de l'Empordà, Cadí-Moixeró, Cap de Creus, Sources of Ter and Freser, Collserola, Ebro Delta, Ports, Montgrí, Medes Islands and Baix Ter, Montseny, Montserrat, Sant Llorenç del Munt and l'Obac, Serra de Montsant, and the Garrotxa Volcanic Zone; as well as three Natural Places of National Interest (Paratge Natural d'Interes Nacional or PNIN), the Pedraforca, the Poblet Forest and the Albères.",
"title": "Geography"
},
{
"paragraph_id": 63,
"text": "After Franco's death in 1975 and the adoption of a democratic constitution in Spain in 1978, Catalonia recovered and extended the powers that it had gained in the Statute of Autonomy of 1932 but lost with the fall of the Second Spanish Republic at the end of the Spanish Civil War in 1939.",
"title": "Politics"
},
{
"paragraph_id": 64,
"text": "This autonomous community has gradually achieved more autonomy since the approval of the Spanish Constitution of 1978. The Generalitat holds exclusive jurisdiction in education, health, culture, environment, communications, transportation, commerce, public safety and local government, and only shares jurisdiction with the Spanish government in justice. In all, some analysts argue that formally the current system grants Catalonia with \"more self-government than almost any other corner in Europe\".",
"title": "Politics"
},
{
"paragraph_id": 65,
"text": "The support for Catalan nationalism ranges from a demand for further autonomy and the federalisation of Spain to the desire for independence from the rest of Spain, expressed by Catalan independentists. The first survey following the Constitutional Court ruling that cut back elements of the 2006 Statute of Autonomy, published by La Vanguardia on 18 July 2010, found that 46% of the voters would support independence in a referendum. In February of the same year, a poll by the Open University of Catalonia gave more or less the same results. Other polls have shown lower support for independence, ranging from 40 to 49%. Although it is established in the whole of the territory, support for independence is significantly higher in the hinterland and the northeast, away from the more populated coastal areas such as Barcelona.",
"title": "Politics"
},
{
"paragraph_id": 66,
"text": "Since 2011 when the question started to be regularly surveyed by the governmental Center for Public Opinion Studies (CEO), support for Catalan independence has been on the rise. According to the CEO opinion poll from July 2016, 47.7% of Catalans would vote for independence and 42.4% against it while, about the question of preferences, according to the CEO opinion poll from March 2016, a 57.2 claim to be \"absolutely\" or \"fairly\" in favour of independence. Other polls have shown lower support for independence, ranging from 40 to 49%. Other polls show more variable results, according with the Spanish CIS, as of December 2016, 47% of Catalans rejected independence and 45% supported it.",
"title": "Politics"
},
{
"paragraph_id": 67,
"text": "In hundreds of non-binding local referendums on independence, organised across Catalonia from 13 September 2009, a large majority voted for independence, although critics argued that the polls were mostly held in pro-independence areas. In December 2009, 94% of those voting backed independence from Spain, on a turn-out of 25%. The final local referendum was held in Barcelona, in April 2011. On 11 September 2012, a pro-independence march pulled in a crowd of between 600,000 (according to the Spanish Government), 1.5 million (according to the Guàrdia Urbana de Barcelona), and 2 million (according to its promoters); whereas poll results revealed that half the population of Catalonia supported secession from Spain.",
"title": "Politics"
},
{
"paragraph_id": 68,
"text": "Two major factors were Spain's Constitutional Court's 2010 decision to declare part of the 2006 Statute of Autonomy of Catalonia unconstitutional, as well as the fact that Catalonia contributes 19.49% of the central government's tax revenue, but only receives 14.03% of central government's spending.",
"title": "Politics"
},
{
"paragraph_id": 69,
"text": "Parties that consider themselves either Catalan nationalist or independentist have been present in all Catalan governments since 1980. The largest Catalan nationalist party, Convergence and Union, ruled Catalonia from 1980 to 2003, and returned to power in the 2010 election. Between 2003 and 2010, a leftist coalition, composed by the Catalan Socialists' Party, the pro-independence Republican Left of Catalonia and the leftist-environmentalist Initiative for Catalonia-Greens, implemented policies that widened Catalan autonomy.",
"title": "Politics"
},
{
"paragraph_id": 70,
"text": "In the 25 November 2012 Catalan parliamentary election, sovereigntist parties supporting a secession referendum gathered 59.01% of the votes and held 87 of the 135 seats in the Catalan Parliament. Parties supporting independence from the rest of Spain obtained 49.12% of the votes and a majority of 74 seats.",
"title": "Politics"
},
{
"paragraph_id": 71,
"text": "Artur Mas, then the president of Catalonia, organised early elections that took place on 27 September 2015. In these elections, Convergència and Esquerra Republicana decided to join, and they presented themselves under the coalition named Junts pel Sí (in Catalan, Together for Yes). Junts pel Sí won 62 seats and was the most voted party, and CUP (Candidatura d'Unitat Popular, a far-left and independentist party) won another 10, so the sum of all the independentist forces/parties was 72 seats, reaching an absolute majority, but not in number of individual votes, comprising 47,74% of the total.",
"title": "Politics"
},
{
"paragraph_id": 72,
"text": "The Statute of Autonomy of Catalonia is the fundamental organic law, second only to the Spanish Constitution from which the Statute originates.",
"title": "Politics"
},
{
"paragraph_id": 73,
"text": "In the Spanish Constitution of 1978 Catalonia, along with the Basque Country and Galicia, was defined as a \"nationality\". The same constitution gave Catalonia the automatic right to autonomy, which resulted in the Statute of Autonomy of Catalonia of 1979.",
"title": "Politics"
},
{
"paragraph_id": 74,
"text": "Both the 1979 Statute of Autonomy and the current one, approved in 2006, state that \"Catalonia, as a nationality, exercises its self-government constituted as an Autonomous Community in accordance with the Constitution and with the Statute of Autonomy of Catalonia, which is its basic institutional law, always under the law in Spain\".",
"title": "Politics"
},
{
"paragraph_id": 75,
"text": "The Preamble of the 2006 Statute of Autonomy of Catalonia states that the Parliament of Catalonia has defined Catalonia as a nation, but that \"the Spanish Constitution recognizes Catalonia's national reality as a nationality\". While the Statute was approved by and sanctioned by both the Catalan and Spanish parliaments, and later by referendum in Catalonia, it has been subject to a legal challenge by the surrounding autonomous communities of Aragon, Balearic Islands and Valencia, as well as by the conservative People's Party. The objections are based on various issues such as disputed cultural heritage but, especially, on the Statute's alleged breaches of the principle of \"solidarity between regions\" in fiscal and educational matters enshrined by the Constitution.",
"title": "Politics"
},
{
"paragraph_id": 76,
"text": "Spain's Constitutional Court assessed the disputed articles and on 28 June 2010, issued its judgment on the principal allegation of unconstitutionality presented by the People's Party in 2006. The judgment granted clear passage to 182 articles of the 223 that make up the fundamental text. The court approved 73 of the 114 articles that the People's Party had contested, while declaring 14 articles unconstitutional in whole or in part and imposing a restrictive interpretation on 27 others. The court accepted the specific provision that described Catalonia as a \"nation\", however ruled that it was a historical and cultural term with no legal weight, and that Spain remained the only nation recognised by the constitution.",
"title": "Politics"
},
{
"paragraph_id": 77,
"text": "The Catalan Statute of Autonomy establishes that Catalonia, as an autonomous community, is organised politically through the Generalitat of Catalonia (Catalan: Generalitat de Catalunya), confirmed by the Parliament, the Presidency of the Generalitat, the Government or Executive Council and the other institutions established by the Parliament, among them the Ombudsman (Síndic de Greuges), the Office of Auditors (Sindicatura de Comptes) the Council for Statutory Guarantees (Consell de Garanties Estatutàries) or the Audiovisual Council of Catalonia (Consell de l'Audiovisual de Catalunya).",
"title": "Politics"
},
{
"paragraph_id": 78,
"text": "The Parliament of Catalonia (Catalan: Parlament de Catalunya) is the unicameral legislative body of the Generalitat and represents the people of Catalonia. Its 135 members (diputats) are elected by universal suffrage to serve for a four-year period. According to the Statute of Autonomy, it has powers to legislate over devolved matters such as education, health, culture, internal institutional and territorial organization, nomination of the President of the Generalitat and control the Government, budget and other affairs. The last Catalan election was held on 14 February 2021, and its current speaker (president) is Laura Borràs, incumbent since 12 March 2018.",
"title": "Politics"
},
{
"paragraph_id": 79,
"text": "The President of the Generalitat of Catalonia (Catalan: president de la Generalitat de Catalunya) is the highest representative of Catalonia, and is also responsible of leading the government's action, presiding the Executive Council. Since the restoration of the Generalitat on the return of democracy in Spain, the Presidents of Catalonia have been Josep Tarradellas (1977–1980, president in exile since 1954), Jordi Pujol (1980–2003), Pasqual Maragall (2003–2006), José Montilla (2006–2010), Artur Mas (2010–2016), Carles Puigdemont (2016–2017) and, after the imposition of direct rule from Madrid, Quim Torra (2018–2020) and Pere Aragonès (2021–).",
"title": "Politics"
},
{
"paragraph_id": 80,
"text": "The Executive Council (Catalan: Consell Executiu) or Government (Govern), is the body responsible of the government of the Generalitat, it holds executive and regulatory power, being accountable to the Catalan Parliament. It comprises the President of the Generalitat, the First Minister (conseller primer) or the Vice President, and the ministers (consellers) appointed by the president. Its seat is the Palau de la Generalitat, Barcelona. In 2021 the government was a coalition of two parties, the Republican Left of Catalonia (ERC) and Together for Catalonia (Junts) and is made up of 14 ministers, including the vice President, alongside to the president and a secretary of government, but in October 2022 Together for Catalonia (Junts) left the coalition and the government.",
"title": "Politics"
},
{
"paragraph_id": 81,
"text": "Catalonia has its own police force, the Mossos d'Esquadra (officially called Mossos d'Esquadra-Policia de la Generalitat de Catalunya), whose origins date back to the 18th century. Since 1980 they have been under the command of the Generalitat, and since 1994 they have expanded in number in order to replace the national Civil Guard and National Police Corps, which report directly to the Homeland Department of Spain. The national bodies retain personnel within Catalonia to exercise functions of national scope such as overseeing ports, airports, coasts, international borders, custom offices, the identification of documents and arms control, immigration control, terrorism prevention, arms trafficking prevention, amongst others.",
"title": "Politics"
},
{
"paragraph_id": 82,
"text": "Most of the justice system is administered by national judicial institutions, the highest body and last judicial instance in the Catalan jurisdiction, integrating the Spanish judiciary, is the High Court of Justice of Catalonia. The criminal justice system is uniform throughout Spain, while civil law is administered separately within Catalonia. The civil laws that are subject to autonomous legislation have been codified in the Civil Code of Catalonia (Codi civil de Catalunya) since 2002.",
"title": "Politics"
},
{
"paragraph_id": 83,
"text": "Catalonia, together with Navarre and the Basque Country, are the Spanish communities with the highest degree of autonomy in terms of law enforcement.",
"title": "Politics"
},
{
"paragraph_id": 84,
"text": "Catalonia is organised territorially into provinces, further subdivided into comarques and municipalities. The 2006 Statute of Autonomy of Catalonia establishes the administrative organisation of three local authorities: vegueries, comarques, and municipalities.",
"title": "Politics"
},
{
"paragraph_id": 85,
"text": "Catalonia is divided administratively into four provinces, the governing body of which is the Provincial Deputation (Catalan: Diputació Provincial, Spanish: Diputación Provincial). The four provinces and their populations are:",
"title": "Politics"
},
{
"paragraph_id": 86,
"text": "Comarques (singular: \"comarca\") are entities composed by the municipalities to manage their responsibilities and services. The current regional division has its roots in a decree of the Generalitat de Catalunya of 1936, in effect until 1939, when it was suppressed by Franco. In 1987 the Catalan Government reestablished the comarcal division and in 1988 three new comarques were added (Alta Ribagorça, Pla d'Urgell and Pla de l'Estany). In 2015 was created an additional comarca, the Moianès. At present there are 41, excluding Aran. Every comarca is administered by a comarcal council (consell comarcal).",
"title": "Politics"
},
{
"paragraph_id": 87,
"text": "The Aran Valley (Val d'Aran), previously considered a comarca, obtained in 1990 a particular status within Catalonia due to its differences in culture and language, as Occitan is the native language of the Valley, being administered by a body known as the Conselh Generau d'Aran (General Council of Aran). Since 2015 it is defined as \"unique territorial entity\", while the powers of the Conselh Generau were expanded.",
"title": "Politics"
},
{
"paragraph_id": 88,
"text": "There are at present 947 municipalities (municipis) in Catalonia. Each municipality is run by a council (ajuntament) elected every four years by the residents in local elections. The council consists of a number of members (regidors) depending on population, who elect the mayor (alcalde or batlle). Its seat is the town hall (ajuntament, casa de la ciutat or casa de la vila).",
"title": "Politics"
},
{
"paragraph_id": 89,
"text": "The vegueria is a proposed type of division defined as a specific territorial area for the exercise of government and inter-local cooperation with legal personality. The current Statute of Autonomy states vegueries are intended to supersede provinces in Catalonia and take over many of functions of the comarques.",
"title": "Politics"
},
{
"paragraph_id": 90,
"text": "The territorial plan of Catalonia (Pla territorial general de Catalunya) provided six general functional areas, but was amended by Law 24/2001, of 31 December, recognizing the Alt Pirineu i Aran as a new functional area differentiated of Ponent. On 14 July 2010 the Catalan Parliament approved the creation of the functional area of the Penedès.",
"title": "Politics"
},
{
"paragraph_id": 91,
"text": "A highly industrialized region, the nominal GDP of Catalonia in 2018 was €228 billion (second after the community of Madrid, €230 billion) and the per capita GDP was €30,426 ($32,888), behind Madrid (€35,041), the Basque Country (€33,223), and Navarre (€31,389). That year, the GDP growth was 2.3%.",
"title": "Economy"
},
{
"paragraph_id": 92,
"text": "Catalonia's long-term credit rating is BB (Non-Investment Grade) according to Standard & Poor's, Ba2 (Non-Investment Grade) according to Moody's, and BBB- (Low Investment Grade) according to Fitch Ratings. Catalonia's rating is tied for worst with between 1 and 5 other autonomous communities of Spain, depending on the rating agency.",
"title": "Economy"
},
{
"paragraph_id": 93,
"text": "The city of Barcelona occupies the eighth position as one of the world's best cities to live, work, research and visit in 2021, according to the report \"The World's Best Cities 2021\", prepared by Resonance Consultancy.",
"title": "Economy"
},
{
"paragraph_id": 94,
"text": "The Catalan capital, despite the current moment of crisis, is also one of the European bases of \"reference for start-ups\" and the fifth city in the world to establish one of these companies, behind London, Berlin, Paris and Amsterdam, according to the Eu-Starts-Up 2020 study. Barcelona is behind London, New York, Paris, Moscow, Tokyo, Dubai and Singapore and ahead of Los Angeles and Madrid.",
"title": "Economy"
},
{
"paragraph_id": 95,
"text": "In the context of the financial crisis of 2007–2008, Catalonia was expected to suffer a recession amounting to almost a 2% contraction of its regional GDP in 2009. Catalonia's debt in 2012 was the highest of all Spain's autonomous communities, reaching €13,476 million, i.e. 38% of the total debt of the 17 autonomous communities, but in recent years its economy recovered a positive evolution and the GDP grew a 3.3% in 2015.",
"title": "Economy"
},
{
"paragraph_id": 96,
"text": "Catalonia is amongst the List of country subdivisions by GDP over 100 billion US dollars and is a member of the Four Motors for Europe organisation.",
"title": "Economy"
},
{
"paragraph_id": 97,
"text": "The distribution of sectors is as follows:",
"title": "Economy"
},
{
"paragraph_id": 98,
"text": "The main tourist destinations in Catalonia are the city of Barcelona, the beaches of the Costa Brava in Girona, the beaches of the Costa del Maresme and Costa del Garraf from Malgrat de Mar to Vilanova i la Geltrú and the Costa Daurada in Tarragona. In the High Pyrenees there are several ski resorts, near Lleida. On 1 November 2012, Catalonia started charging a tourist tax. The revenue is used to promote tourism, and to maintain and upgrade tourism-related infrastructure.",
"title": "Economy"
},
{
"paragraph_id": 99,
"text": "Many of Spain's leading savings banks were based in Catalonia before the independence referendum of 2017. However, in the aftermath of the referendum, many of them moved their registered office to other parts of Spain. That includes the two biggest Catalan banks at that moment, La Caixa, which moved its office to Palma de Mallorca, and Banc Sabadell, ranked fourth among all Spanish private banks and which moved its office to Alicante. That happened after the Spanish government passed a law allowing companies to move their registered office without requiring the approval of the company's general meeting of shareholders. Overall, there was a negative net relocation rate of companies based in Catalonia moving to other autonomous communities of Spain. From the 2017 independence referendum until the end of 2018, for example, Catalonia lost 5454 companies to other parts of Spain (mainly Madrid), 2359 only in 2018, gaining 467 new ones from the rest of the country during 2018. It has been reported that the Spanish government and the Spanish King Felipe VI pressured some of the big Catalan companies to move their headquarters outside of the region.",
"title": "Economy"
},
{
"paragraph_id": 100,
"text": "The stock market of Barcelona, which in 2016 had a volume of around €152 billion, is the second largest of Spain after Madrid, and Fira de Barcelona organizes international exhibitions and congresses to do with different sectors of the economy.",
"title": "Economy"
},
{
"paragraph_id": 101,
"text": "The main economic cost for Catalan families is the purchase of a home. According to data from the Society of Appraisal on 31 December 2005 Catalonia is, after Madrid, the second most expensive region in Spain for housing: 3,397 €/m on average (see Spanish property bubble).",
"title": "Economy"
},
{
"paragraph_id": 102,
"text": "The unemployment rate stood at 10.5% in 2019 and was lower than the national average.",
"title": "Economy"
},
{
"paragraph_id": 103,
"text": "Airports in Catalonia are owned and operated by Aena (a Spanish Government entity) except two airports in Lleida which are operated by Aeroports de Catalunya (an entity belonging to the Government of Catalonia).",
"title": "Economy"
},
{
"paragraph_id": 104,
"text": "Since the Middle Ages, Catalonia has been well integrated into international maritime networks. The port of Barcelona (owned and operated by Puertos del Estado, a Spanish Government entity) is an industrial, commercial and tourist port of worldwide importance. With 1,950,000 TEUs in 2015, it is the first container port in Catalonia, the third in Spain after Valencia and Algeciras in Andalusia, the 9th in the Mediterranean Sea, the 14th in Europe and the 68th in the world. It is sixth largest cruise port in the world, the first in Europe and the Mediterranean with 2,364,292 passengers in 2014. The ports of Tarragona (owned and operated by Puertos del Estado) in the southwest and Palamós near Girona at northeast are much more modest. The port of Palamós and the other ports in Catalonia (26) are operated and administered by Ports de la Generalitat, a Catalan Government entity.",
"title": "Economy"
},
{
"paragraph_id": 105,
"text": "The development of these infrastructures, resulting from the topography and history of the Catalan territory, responds strongly to the administrative and political organization of this autonomous community.",
"title": "Economy"
},
{
"paragraph_id": 106,
"text": "There are 12,000 kilometres (7,500 mi) of roads throughout Catalonia.",
"title": "Economy"
},
{
"paragraph_id": 107,
"text": "The principal highways are AP-7 (Autopista de la Mediterrània) and A-7 (Autovia de la Mediterrània). They follow the coast from the French border to Valencia, Murcia and Andalusia. The main roads generally radiate from Barcelona. The AP-2 (Autopista del Nord-est) and A-2 (Autovia del Nord-est) connect inland and onward to Madrid.",
"title": "Economy"
},
{
"paragraph_id": 108,
"text": "Other major roads are:",
"title": "Economy"
},
{
"paragraph_id": 109,
"text": "Public-own roads in Catalonia are either managed by the autonomous government of Catalonia (e.g., C- roads) or the Spanish government (e.g., AP- , A- , N- roads).",
"title": "Economy"
},
{
"paragraph_id": 110,
"text": "Catalonia saw the first railway construction in the Iberian Peninsula in 1848, linking Barcelona with Mataró. Given the topography, most lines radiate from Barcelona. The city has both suburban and inter-city services. The main east coast line runs through the province connecting with the SNCF (French Railways) at Portbou on the coast.",
"title": "Economy"
},
{
"paragraph_id": 111,
"text": "There are two publicly owned railway companies operating in Catalonia: the Catalan FGC that operates commuter and regional services, and the Spanish national Renfe that operates long-distance and high-speed rail services (AVE and Avant) and the main commuter and regional service Rodalies de Catalunya, administered by the Catalan government since 2010.",
"title": "Economy"
},
{
"paragraph_id": 112,
"text": "High-speed rail (AVE) services from Madrid currently reach Barcelona, via Lleida and Tarragona. The official opening between Barcelona and Madrid took place 20 February 2008. The journey between Barcelona and Madrid now takes about two-and-a-half hours. A connection to the French high-speed TGV network has been completed (called the Perpignan–Barcelona high-speed rail line) and the Spanish AVE service began commercial services on the line 9 January 2013, later offering services to Marseille on their high speed network. This was shortly followed by the commencement of commercial service by the French TGV on 17 January 2013, leading to an average travel time on the Paris-Barcelona TGV route of 7h 42m. This new line passes through Girona and Figueres with a tunnel through the Pyrenees.",
"title": "Economy"
},
{
"paragraph_id": 113,
"text": "As of 2017, the official population of Catalonia was 7,522,596. 1,194,947 residents did not have Spanish citizenship, accounting for about 16% of the population.",
"title": "Demographics"
},
{
"paragraph_id": 114,
"text": "The Urban Region of Barcelona includes 5,217,864 people and covers an area of 2,268 km (876 sq mi). The metropolitan area of the Urban Region includes cities such as L'Hospitalet de Llobregat, Sabadell, Terrassa, Badalona, Santa Coloma de Gramenet and Cornellà de Llobregat.",
"title": "Demographics"
},
{
"paragraph_id": 115,
"text": "In 1900, the population of Catalonia was 1,966,382 people and in 1970 it was 5,122,567. The sizeable increase of the population was due to the demographic boom in Spain during the 1960s and early 1970s as well as in consequence of large-scale internal migration from the rural economically weak regions to its more prospering industrial cities. In Catalonia, that wave of internal migration arrived from several regions of Spain, especially from Andalusia, Murcia and Extremadura. As of 1999, it was estimated that over 60% of Catalans descended from 20th century migrations from other parts of Spain.",
"title": "Demographics"
},
{
"paragraph_id": 116,
"text": "Immigrants from other countries settled in Catalonia since the 1990s; a large percentage comes from Africa, Latin America and Eastern Europe, and smaller numbers from Asia and Southern Europe, often settling in urban centers such as Barcelona and industrial areas. In 2017, Catalonia had 940,497 foreign residents (11.9% of the total population) with non-Spanish ID cards, without including those who acquired Spanish citizenship.",
"title": "Demographics"
},
{
"paragraph_id": 117,
"text": "Religion in Catalonia (2020)",
"title": "Demographics"
},
{
"paragraph_id": 118,
"text": "Historically, all the Catalan population was Christian, specifically Catholic, but since the 1980s there has been a trend of decline of Christianity. Nevertheless, according to the most recent study sponsored by the Government of Catalonia, as of 2020, 62.3% of the Catalans identify as Christians (up from 61.9% in 2016 and 56.5% in 2014) of whom 53.0% Catholics, 7.0% Protestants and Evangelicals, 1.3% Orthodox Christians and 1.0% Jehovah's Witnesses. At the same time, 18.6% of the population identify as atheists, 8.8% as agnostics, 4.3% as Muslims, and a further 3.4% as being of other religions.",
"title": "Demographics"
},
{
"paragraph_id": 119,
"text": "According to the linguistic census held by the Government of Catalonia in 2013, Spanish is the most spoken language in Catalonia (46.53% claim Spanish as \"their own language\"), followed by Catalan (37.26% claim Catalan as \"their own language\"). In everyday use, 11.95% of the population claim to use both languages equally, whereas 45.92% mainly use Spanish and 35.54% mainly use Catalan. There is a significant difference between the Barcelona metropolitan area (and, to a lesser extent, the Tarragona area), where Spanish is more spoken than Catalan, and the more rural and small town areas, where Catalan clearly prevails over Spanish.",
"title": "Demographics"
},
{
"paragraph_id": 120,
"text": "Originating in the historic territory of Catalonia, Catalan has enjoyed special status since the approval of the Statute of Autonomy of 1979 which declares it to be \"Catalonia's own language\", a term which signifies a language given special legal status within a Spanish territory, or which is historically spoken within a given region. The other languages with official status in Catalonia are Spanish, which has official status throughout Spain, and Aranese Occitan, which is spoken in Val d'Aran.",
"title": "Demographics"
},
{
"paragraph_id": 121,
"text": "Since the Statute of Autonomy of 1979, Aranese (a Gascon dialect of Occitan) has also been official and subject to special protection in Val d'Aran. This small area of 7,000 inhabitants was the only place where a dialect of Occitan had received full official status. Then, on 9 August 2006, when the new Statute came into force, Occitan became official throughout Catalonia. Occitan is the mother tongue of 22.4% of the population of Val d'Aran, which has attracted heavy immigration from other Spanish regions to work in the service industry. Catalan Sign Language is also officially recognised.",
"title": "Demographics"
},
{
"paragraph_id": 122,
"text": "Although not considered an \"official language\" in the same way as Catalan, Spanish, and Occitan, the Catalan Sign Language, with about 18,000 users in Catalonia, is granted official recognition and support: \"The public authorities shall guarantee the use of Catalan sign language and conditions of equality for deaf people who choose to use this language, which shall be the subject of education, protection and respect.\"",
"title": "Demographics"
},
{
"paragraph_id": 123,
"text": "As was the case since the ascent of the Bourbon dynasty to the throne of Spain after the War of the Spanish Succession, and with the exception of the short period of the Second Spanish Republic, under Francoist Spain Catalan was banned from schools and all other official use, so that for example families were not allowed to officially register children with Catalan names. Although never completely banned, Catalan language publishing was severely restricted during the early 1940s, with only religious texts and small-run self-published texts being released. Some books were published clandestinely or circumvented the restrictions by showing publishing dates prior to 1936. This policy was changed in 1946, when restricted publishing in Catalan resumed.",
"title": "Demographics"
},
{
"paragraph_id": 124,
"text": "Rural–urban migration originating in other parts of Spain also reduced the social use of Catalan in urban areas and increased the use of Spanish. Lately, a similar sociolinguistic phenomenon has occurred with foreign immigration. Catalan cultural activity increased in the 1960s and the teaching of Catalan began thanks to the initiative of associations such as Òmnium Cultural.",
"title": "Demographics"
},
{
"paragraph_id": 125,
"text": "After the end of Francoist Spain, the newly established self-governing democratic institutions in Catalonia embarked on a long-term language policy to recover the use of Catalan and has, since 1983, enforced laws which attempt to protect and extend the use of Catalan. This policy, known as the \"linguistic normalisation\" (normalització lingüística in Catalan, normalización lingüística in Spanish) has been supported by the vast majority of Catalan political parties through the last thirty years. Some groups consider these efforts a way to discourage the use of Spanish, whereas some others, including the Catalan government and the European Union consider the policies respectful, or even as an example which \"should be disseminated throughout the Union\".",
"title": "Demographics"
},
{
"paragraph_id": 126,
"text": "Today, Catalan is the main language of the Catalan autonomous government and the other public institutions that fall under its jurisdiction. Basic public education is given mainly in Catalan, but also there are some hours per week of Spanish medium instruction. Although businesses are required by law to display all information (e.g. menus, posters) at least in Catalan, this not systematically enforced. There is no obligation to display this information in either Occitan or Spanish, although there is no restriction on doing so in these or other languages. The use of fines was introduced in a 1997 linguistic law that aims to increase the public use of Catalan and defend the rights of Catalan speakers. On the other hand, the Spanish Constitution does not recognize equal language rights for national minorities since it enshrined Spanish as the only official language of the state, the knowledge of which being compulsory. Numerous laws regarding for instance the labelling of pharmaceutical products, make in effect Spanish the only language of compulsory use.",
"title": "Demographics"
},
{
"paragraph_id": 127,
"text": "The law ensures that both Catalan and Spanish – being official languages – can be used by the citizens without prejudice in all public and private activities. The Generalitat uses Catalan in its communications and notifications addressed to the general population, but citizens can also receive information from the Generalitat in Spanish if they so wish. Debates in the Catalan Parliament take place almost exclusively in Catalan and the Catalan public television broadcasts programs mainly in Catalan.",
"title": "Demographics"
},
{
"paragraph_id": 128,
"text": "Due to the intense immigration which Spain in general and Catalonia in particular experienced in the first decade of the 21st century, many foreign languages are spoken in various cultural communities in Catalonia, of which Rif-Berber, Moroccan Arabic, Romanian and Urdu are the most common ones.",
"title": "Demographics"
},
{
"paragraph_id": 129,
"text": "In Catalonia, there is a high social and political consensus on the language policies favoring Catalan, also among Spanish speakers and speakers of other languages. However, some of these policies have been criticised for trying to promote Catalan by imposing fines on businesses. For example, following the passage of the law on Catalan cinema in March 2010, which established that half of the movies shown in Catalan cinemas had to be in Catalan, a general strike of 75% of the cinemas took place. The Catalan government gave in and dropped the clause that forced 50% of the movies to be dubbed or subtitled in Catalan before the law came to effect. On the other hand, organisations such as Plataforma per la Llengua reported different violations of the linguistic rights of the Catalan speakers in Catalonia and the other Catalan-speaking territories in Spain, most of them caused by the institutions of the Spanish government in these territories.",
"title": "Demographics"
},
{
"paragraph_id": 130,
"text": "The Catalan language policy has been challenged by some political parties in the Catalan Parliament. Citizens, currently the main opposition party, has been one of the most consistent critics of the Catalan language policy within Catalonia. The Catalan branch of the People's Party has a more ambiguous position on the issue: on one hand, it demands a bilingual Catalan–Spanish education and a more balanced language policy that would defend Catalan without favoring it over Spanish, whereas on the other hand, a few local PP politicians have supported in their municipalities measures privileging Catalan over Spanish and it has defended some aspects of the official language policies, sometimes against the positions of its colleagues from other parts of Spain.",
"title": "Demographics"
},
{
"paragraph_id": 131,
"text": "Catalonia has given to the world many important figures in the area of the art. Catalan painters internationally known are, among others, Salvador Dalí, Joan Miró and Antoni Tàpies. Closely linked with the Catalan pictorial atmosphere, Pablo Picasso lived in Barcelona during his youth, training them as an artist and creating the movement of cubism. Other important artists are Claudi Lorenzale for the medieval Romanticism that marked the artistic Renaixença, Marià Fortuny for the Romanticism and Catalan Orientalism of the nineteenth century, Ramon Casas or Santiago Rusiñol, main representatives of the pictorial current of Catalan modernism from the end of the nineteenth century to the beginning of the twentieth century, Josep Maria Sert for early 20th-century Noucentisme, or Josep Maria Subirachs for expressionist or abstract sculpture and painting of the late twentieth century.",
"title": "Culture"
},
{
"paragraph_id": 132,
"text": "The most important painting museums of Catalonia are the Teatre-Museu Dalí in Figueres, the National Art Museum of Catalonia (MNAC), Picasso Museum, Fundació Antoni Tàpies, Joan Miró Foundation, the Barcelona Museum of Contemporary Art (MACBA), the Centre of Contemporary Culture of Barcelona (CCCB) and the CaixaForum.",
"title": "Culture"
},
{
"paragraph_id": 133,
"text": "In the field of architecture were developed and adapted to Catalonia different artistic styles prevalent in Europe, leaving footprints in many churches, monasteries and cathedrals, of Romanesque (the best examples of which are located in the northern half of the territory) and Gothic styles. The Gothic developed in Barcelona and its area of influence is known as Catalan Gothic, with some particular characteristics. The church of Santa Maria del Mar is an example of this kind of style. During the Middle Ages, many fortified castles were built by feudal nobles to mark their powers.",
"title": "Culture"
},
{
"paragraph_id": 134,
"text": "There are some examples of Renaissance (such as the Palau de la Generalitat), Baroque and Neoclassical architectures. In the late nineteenth century Modernism (Art Nouveau) appeared as the national art. The world-renowned Catalan architects of this style are Antoni Gaudí, Lluís Domènech i Montaner and Josep Puig i Cadafalch. Thanks to the urban expansion of Barcelona during the last decades of the century and the first ones of the next, many buildings of the Eixample are modernists. In the field of architectural rationalism, which turned especially relevant in Catalonia during the Republican era (1931–1939) highlighting Josep Lluís Sert and Josep Torres i Clavé, members of the GATCPAC and, in contemporany architecture, Ricardo Bofill and Enric Miralles.",
"title": "Culture"
},
{
"paragraph_id": 135,
"text": "There are several UNESCO World Heritage Sites in Catalonia:",
"title": "Culture"
},
{
"paragraph_id": 136,
"text": "The oldest surviving literary use of the Catalan language is considered to be the religious text known as Homilies d'Organyà, written either in late 11th or early 12th century.",
"title": "Culture"
},
{
"paragraph_id": 137,
"text": "There are two historical moments of splendor of Catalan literature. The first begins with the historiographic chronicles of the 13th century (chronicles written between the thirteenth and fourteenth centuries narrating the deeds of the monarchs and leading figures of the Crown of Aragon) and the subsequent Golden Age of the 14th and 15th centuries. After that period, between the 16th and 19th centuries the Romantic historiography defined this era as the Decadència, considered as the \"decadent\" period in Catalan literature because of a general falling into disuse of the vernacular language in cultural contexts and lack of patronage among the nobility.",
"title": "Culture"
},
{
"paragraph_id": 138,
"text": "The second moment of splendor began in the 19th century with the cultural and political Renaixença (Renaissance) represented by writers and poets such as Jacint Verdaguer, Víctor Català (pseudonym of Caterina Albert i Paradís), Narcís Oller, Joan Maragall and Àngel Guimerà. During the 20th century, avant-garde movements developed, initiated by the Generation of '14 (called Noucentisme in Catalonia), represented by Eugenio d'Ors, Joan Salvat-Papasseit, Josep Carner, Carles Riba, J.V. Foix and others. During the dictatorship of Primo de Rivera, the Civil War (Generation of '36) and the Francoist period, Catalan literature was maintained despite the repression against the Catalan language, being often produced in exile.",
"title": "Culture"
},
{
"paragraph_id": 139,
"text": "The most outstanding authors of this period are Salvador Espriu, Josep Pla, Josep Maria de Sagarra (who are considered mainly responsible for the renewal of Catalan prose), Mercè Rodoreda, Joan Oliver Sallarès or \"Pere Quart\", Pere Calders, Gabriel Ferrater, Manuel de Pedrolo, Agustí Bartra or Miquel Martí i Pol. In addition, several foreign writers who fought in the International Brigades, or other military units, have since recounted their experiences of fighting in their works, historical or fictional, with for example, George Orwell, in Homage to Catalonia (1938) or Claude Simon's Le Palace (1962) and Les Géorgiques (1981).",
"title": "Culture"
},
{
"paragraph_id": 140,
"text": "After the transition to democracy (1975–1978) and the restoration of the Generalitat (1977), literary life and the editorial market have returned to normality and literary production in Catalan is being bolstered with a number of language policies intended to protect Catalan culture. Besides the aforementioned authors, other relevant 20th-century writers of the Francoist and democracy periods include Joan Brossa, Agustí Bartra, Manuel de Pedrolo, Pere Calders or Quim Monzó.",
"title": "Culture"
},
{
"paragraph_id": 141,
"text": "Ana María Matute, Jaime Gil de Biedma, Manuel Vázquez Montalbán and Juan Goytisolo are among the most prominent Catalan writers in the Spanish language since the democratic restoration in Spain.",
"title": "Culture"
},
{
"paragraph_id": 142,
"text": "Castells are one of the main manifestations of Catalan popular culture. The activity consists in constructing human towers by competing colles castelleres (teams). This practice originated in Valls, on the region of the Camp de Tarragona, during the 18th century, and later it was extended to the rest of the territory, especially in the late 20th century. The tradition of els Castells i els Castellers was declared Masterpiece of the Oral and Intangible Heritage of Humanity by UNESCO in 2010.",
"title": "Culture"
},
{
"paragraph_id": 143,
"text": "In main celebrations, other elements of the Catalan popular culture are also usually present: parades with gegants (giants), bigheads, stick-dancers and musicians, and the correfoc, where devils and monsters dance and spray showers of sparks using firecrackers. Another traditional celebration in Catalonia is La Patum de Berga, declared a Masterpiece of the Oral and Intangible Heritage of Humanity by the UNESCO on 25 November 2005.",
"title": "Culture"
},
{
"paragraph_id": 144,
"text": "Christmas in Catalonia lasts two days, plus Christmas Eve. On the 25th, Christmas is celebrated, followed by a similar feast on the 26, called Sant Esteve (Saint Steve's Day). This allows families to visit and dine with different sectors of the extended family or get together with friends on the second day.",
"title": "Culture"
},
{
"paragraph_id": 145,
"text": "One of the most deeply rooted and curious Christmas traditions is the popular figure of the Tió de Nadal, consisting of an (often hollow) log with a face painted on it and often two little front legs appended, usually wearing a Catalan hat and scarf. The word has nothing to do with the Spanish word tío, meaning uncle. Tió means log in Catalan. The log is sometimes \"found in the woods\" (in an event staged for children) and then adopted and taken home, where it is fed and cared for during a month or so. On Christmas Day or on Christmas Eve, a game is played where children march around the house singing a song requesting the log to poop, then they hit the log with a stick, to make it poop, and lo and behold, as if through magic, it poops candy, and sometimes other small gifts. Usually, the larger or main gifts are brought by the Three Kings on 6 January, and the tió only brings small things.",
"title": "Culture"
},
{
"paragraph_id": 146,
"text": "Another custom is to make a pessebre (nativity scene) in the home or in shop windows, the latter sometimes competing in originality or sheer size and detail. Churches often host exhibits of numerous dioramas by nativity scene makers, or a single nativity scene they put out, and town halls generally put out a nativity scene in the central square. In Barcelona, every year, the main nativity scene is designed by different artists, and often ends up being an interesting, post-modern or conceptual and strange creation. In the home, the nativity scene often consists of strips of cork bark to represent cliffs or mountains in the background, moss as grass in the foreground, some wood chips or other as dirt, and aluminum foil for rivers and lakes. The traditional figurines often included are the three wise men on camels or horses, which are moved every day or so to go closer to the manger, a star with a long tail in the background to lead people to the spot, the annunciation with shepherds having a meal and an angel appearing (hanging from something), a washer lady washing clothes in the pond, sheep, ducks, people carrying packages on their backs, a donkey driver with a load of twigs, and atrezzo such as a starry sky, miniature towns placed in the distance, either Oriental-styled or local-looking, a bridge over the river, trees, etc.",
"title": "Culture"
},
{
"paragraph_id": 147,
"text": "One of the most astonishing and sui-generis figurines traditionally placed in the nativity scene, to the great glee of children, is the caganer, a person depicted in the act of defecating. This figurine is hidden in some corner of the nativity scene and the game is to detect it. Of course, churches forgo this figurine, and the main nativity scene of Barcelona, for instance, likewise does not feature it. The caganer is so popular it has, together with the tió, long been a major part of the Christmas markets, where they come in the guise of your favorite politicians or other famous people, as well as the traditional figures of a Catalan farmer. People often buy a figurine of a caganer in the guise of a famous person they are actually fond of, contrary to what one would imagine, though sometimes people buy a caganer in the guise of someone they dislike, although this means they have to look at them in the home.",
"title": "Culture"
},
{
"paragraph_id": 148,
"text": "Another (extended) Christmas tradition is the celebration of the Epiphany on 6 January, which is called Reis, meaning Three Kings Day. This is every important in Catalonia and the Catalan-speaking areas, and families go to watch major parades on the eve of the Epiphany, where they can greet the kings and watch them pass by in pomp and circumstance, on floats and preceded and followed by pages, musicians, dancers, etc. They often give the kings letters with their gift requests, which are collected by the pages. On the next day, the children find the gifts the three kings brought for them.",
"title": "Culture"
},
{
"paragraph_id": 149,
"text": "In addition to traditional local Catalan culture, traditions from other parts of Spain can be found as a result of migration from other regions, for instance the celebration of the Andalusian Feria de Abril in Catalonia.",
"title": "Culture"
},
{
"paragraph_id": 150,
"text": "On 28 July 2010, second only after the Canary Islands, Catalonia became another Spanish territory to forbid bullfighting. The ban, which went into effect on 1 January 2012, had originated in a popular petition supported by over 180,000 signatures.",
"title": "Culture"
},
{
"paragraph_id": 151,
"text": "The sardana is considered to be the most characteristic Catalan folk dance, interpreted to the rhythm of tamborí, tible and tenora (from the oboe family), trumpet, trombó (trombone), fiscorn (family of bugles) and contrabaix with three strings played by a cobla, and are danced in a circle dance. Other tunes and dances of the traditional music are the contrapàs (obsolete today), ball de bastons (the \"dance of sticks\"), the moixiganga, the goigs (popular songs), the galops or the jota in the southern part. The havaneres are characteristic in some marine localities of the Costa Brava, especially during the summer months when these songs are sung outdoors accompanied by a cremat of burned rum.",
"title": "Culture"
},
{
"paragraph_id": 152,
"text": "Art music was first developed, up to the nineteenth century and, as in much of Europe, in a liturgical setting, particularly marked by the Escolania de Montserrat. The main Western musical trends have marked these productions, medieval monodies or polyphonies, with the work of Abbot Oliba in the eleventh century or the compilation Llibre Vermell de Montserrat (\"Red Book of Montserrat\") from the fourteenth century. Through the Renaissance there were authors such as Pere Albert Vila, Joan Brudieu or the two Mateu Fletxa (\"The Old\" and \"The Young\"). Baroque had composers like Joan Cererols. The Romantic music was represented by composers such as Fernando Sor, Josep Anselm Clavé (father of choir movement in Catalonia and responsible of the music folk reviving) or Felip Pedrell.",
"title": "Culture"
},
{
"paragraph_id": 153,
"text": "Modernisme also expressed in musical terms from the end of the 19th century onwards, mixing folkloric and post-romantic influences, through the works of Isaac Albéniz and Enric Granados. The avant-garde spirit initiated by the modernists is prolonged throughout the twentieth century, thanks to the activities of the Orfeó Català, a choral society founded in 1891, with its monumental concert hall, the Palau de la Música Catalana in Catalan, built by Lluís Domènech i Montaner from 1905 to 1908, the Barcelona Symphony Orchestra created in 1944 and composers, conductors and musicians engaged against the Francoism like Robert Gerhard, Eduard Toldrà and Pau Casals.",
"title": "Culture"
},
{
"paragraph_id": 154,
"text": "Performances of opera, mostly imported from Italy, began in the 18th century, but some native operas were written as well, including the ones by Domènec Terradellas, Carles Baguer, Ramon Carles, Isaac Albéniz and Enric Granados. The Barcelona main opera house, Gran Teatre del Liceu (opened in 1847), remains one of the most important in Spain, hosting one of the most prestigious music schools in Barcelona, the Conservatori Superior de Música del Liceu. Several lyrical artists trained by this institution gained international renown during the 20th century, such as Victoria de los Ángeles, Montserrat Caballé, Giacomo Aragall and Josep Carreras.",
"title": "Culture"
},
{
"paragraph_id": 155,
"text": "Cellist Pau Casals is admired as an outstanding player. Other popular musical styles were born in the second half of the 20th century such as Nova Cançó from the 1960s with Lluís Llach and the group Els Setze Jutges, the Catalan rumba in the 1960s with Peret, Catalan Rock from the late 1970s with La Banda Trapera del Río and Decibelios for Punk Rock, Sau, Els Pets, Sopa de Cabra or Lax'n'Busto for pop rock or Sangtraït for hard rock, electropop since the 1990s with OBK and indie pop from the 1990s.",
"title": "Culture"
},
{
"paragraph_id": 156,
"text": "Catalonia is the autonomous community, along with Madrid, that has the most media (TV, magazines, newspapers etc.). In Catalonia there is a wide variety of local and comarcal media. With the restoration of democracy, many newspapers and magazines, until then in the hands of the Franco government, were recovered in order to convert them into free and democratic media, while local radios and televisions were implemented.",
"title": "Culture"
},
{
"paragraph_id": 157,
"text": "Televisió de Catalunya, which broadcasts entirely in the Catalan language, is the main Catalan public TV. It has five channels: TV3, El 33, Super3, 3/24, Esport3 and TV3CAT. In 2018, TV3 became the first television channel to be the most viewed one for nine consecutive years in Catalonia. State televisions that broadcast in Catalonia in Spanish language include Televisión Española (with few emissions in Catalan), Antena 3, Cuatro, Telecinco, and La Sexta. Other smaller Catalan television channels include 8TV (owned by Grup Godó), Barça TV and the local televisions, the greatest exponent of which is betevé, the TV channel of Barcelona, which also broadcasts in Catalan.",
"title": "Culture"
},
{
"paragraph_id": 158,
"text": "The two main Catalan newspapers of general information are El Periódico de Catalunya and La Vanguardia, both with editions in Catalan and Spanish. Catalan only published newspapers include Ara and El Punt Avui (from the fusion of El Punt and Avui in 2011), as well as most part of the local press. The Spanish newspapers, such as El País, El Mundo or La Razón, can be also acquired.",
"title": "Culture"
},
{
"paragraph_id": 159,
"text": "Catalonia has a long tradition of use of radio, the first regular radio broadcast in the country was from Ràdio Barcelona in 1924. Today, the public Catalunya Ràdio (owned by Catalan Media Corporation) and the private RAC 1 (belonging to Grup Godó) are the two main radios of Catalonia, both in Catalan.",
"title": "Culture"
},
{
"paragraph_id": 160,
"text": "Regarding the cinema, after the democratic transition, three styles have dominated since then. First, auteur cinema, in the continuity of the Barcelona School, emphasizes experimentation and form, while focusing on developing social and political themes. Worn first by Josep Maria Forn or Bigas Luna, then by Marc Recha, Jaime Rosales and Albert Serra, this genre has achieved some international recognition. Then, the documentary became another genre particularly representative of contemporary Catalan cinema, boosted by Joaquim Jordà i Català and José Luis Guerín. Later, horror films and thrillers have also emerged as a specialty of the Catalan film industry, thanks in particular to the vitality of the Sitges Film Festival, created in 1968. Several directors have gained worldwide renown thanks to this genre, starting with Jaume Balagueró and his series REC (co-directed with Valencian Paco Plaza), Juan Antonio Bayona and El Orfanato or Jaume Collet-Serra with Orphan, Unknown and Non-Stop.",
"title": "Culture"
},
{
"paragraph_id": 161,
"text": "Catalan actors have shot for Spanish and international productions, such as Sergi López.",
"title": "Culture"
},
{
"paragraph_id": 162,
"text": "The Museum of Cinema - Tomàs Mallol Collection (Museu del Cinema – Col.lecció Tomàs Mallol in Catalan) of Girona is home of important permanent exhibitions of cinema and pre-cinema objects. Other important institutions for the promotion of cinema are the Gaudí Awards (Premis Gaudí in Catalan, which replaced from 2009 Barcelona Film Awards themselves created in 2002), serving as equivalent for Catalonia to the Spanish Goya or French César.",
"title": "Culture"
},
{
"paragraph_id": 163,
"text": "Seny is a form of ancestral Catalan wisdom or sensibleness. It involves well-pondered perception of situations, level-headedness, awareness, integrity, and right action. Many Catalans consider seny something unique to their culture, is based on a set of ancestral local customs stemming from the scale of values and social norms of their society.",
"title": "Culture"
},
{
"paragraph_id": 164,
"text": "Sport has had a distinct importance in Catalan life and culture since the beginning of the 20th century; consequently, the region has a well-developed sports infrastructure. The main sports are football, basketball, handball, rink hockey, tennis and motorsport.",
"title": "Culture"
},
{
"paragraph_id": 165,
"text": "Despite the fact that the most popular sports are represented outside by the Spanish national teams, Catalonia can officially play as itself in some others, like korfball, futsal or rugby league. Most of Catalan Sports Federations have a long tradition and some of them participated in the foundation of international sports federations, as the Catalan Federation of Rugby, that was one of the founder members of the Fédération Internationale de Rugby Amateur (FIRA) in 1934. The majority of Catalan sport federations are part of the Sports Federation Union of Catalonia (Catalan: Unió de Federacions Esportives de Catalunya), founded in 1933.",
"title": "Culture"
},
{
"paragraph_id": 166,
"text": "The Catalan Football Federation also periodically fields a national team against international opposition, organizing friendly matches. In the recent years they have played with Bulgaria, Argentina, Brazil, Basque Country, Colombia, Nigeria, Cape Verde and Tunisia. The biggest football clubs are Barcelona (also known as Barça), who have won five European Cups (UEFA Champions League), and Espanyol, who have twice been runner-up of the UEFA Cup (now UEFA Europa League). Barcelona currently play in La Liga while Espanyol currently play in the Segunda División.",
"title": "Culture"
},
{
"paragraph_id": 167,
"text": "The Catalan waterpolo is one of the main powers of the Iberian Peninsula. The Catalans won triumphs in waterpolo competitions at European and world level by club (the Barcelona was champion of Europe in 1981/82 and the Catalonia in 1994/95) and national team (one gold and one silver in Olympic Games and World Championships). It also has many international synchronized swimming champions.",
"title": "Culture"
},
{
"paragraph_id": 168,
"text": "Motorsport has a long tradition in Catalonia, which involving many people, with some world champions and several competitions organized since the beginning of the 20th century. The Circuit de Catalunya, built in 1991, is one of the main motorsport venues, holding the Catalan motorcycle Grand Prix, the Spanish F1 Grand Prix, a DTM race, and several other races.",
"title": "Culture"
},
{
"paragraph_id": 169,
"text": "Catalonia hosted many relevant international sport events, such as the 1992 Summer Olympics in Barcelona, and also the 1955 Mediterranean Games, the 2013 World Aquatics Championships or the 2018 Mediterranean Games. It held annually the fourth-oldest still-existing cycling stage race in the world, the Volta a Catalunya (Tour of Catalonia).",
"title": "Culture"
},
{
"paragraph_id": 170,
"text": "Catalonia has its own representative and distinctive national symbols such as:",
"title": "Culture"
},
{
"paragraph_id": 171,
"text": "Catalan gastronomy has a long culinary tradition. Various local food recipes have been described in documents dating from the fifteenth century. As with all the cuisines of the Mediterranean, Catatonian dishes make abundant use of fish, seafood, olive oil, bread and vegetables. Regional specialties include the pa amb tomàquet (bread with tomato), which consists of bread (sometimes toasted), and tomato seasoned with olive oil and salt. Often the dish is accompanied with any number of sausages (cured botifarres, fuet, iberic ham, etc.), ham or cheeses. Others dishes include the calçotada, escudella i carn d'olla, suquet de peix (fish stew), and a dessert, Catalan cream.",
"title": "Culture"
},
{
"paragraph_id": 172,
"text": "Catalan vineyards also have several Denominacions d'Origen wines, such as: Priorat, Montsant, Penedès and Empordà. There is also a sparkling wine, the cava.",
"title": "Culture"
},
{
"paragraph_id": 173,
"text": "Catalonia is internationally recognized for its fine dining. Three of the World's 50 Best Restaurants are in Catalonia, and four restaurants have three Michelin stars, including restaurants like El Bulli or El Celler de Can Roca, both of which regularly dominate international rankings of restaurants. The region has been awarded the European Region of Gastronomy title for the year 2016.",
"title": "Culture"
},
{
"paragraph_id": 174,
"text": "This article incorporates text from a publication now in the public domain: Chisholm, Hugh, ed. (1911). \"Catalonia\". Encyclopædia Britannica (11th ed.). Cambridge University Press.",
"title": "References"
}
] | Catalonia is an autonomous community of Spain, designated as a nationality by its Statute of Autonomy. Most of its territory lies on the northeast of the Iberian Peninsula, to the south of the Pyrenees mountain range. Catalonia is administratively divided into four provinces: Barcelona, Girona, Lleida, and Tarragona. The capital and largest city, Barcelona, is the second-most populated municipality in Spain and the fifth-most populous urban area in the European Union. Modern-day Catalonia comprises most of the medieval and early modern Principality of Catalonia. It is bordered by France (Occitanie) and Andorra to the north, the Mediterranean Sea to the east, and the Spanish autonomous communities of Aragon to the west and Valencia to the south. The official languages are Catalan, Spanish, and the Aranese dialect of Occitan. In the late 8th century, various counties across the eastern Pyrenees were established by the Frankish kingdom as a defensive barrier against Muslim invasions. In the 10th century, the County of Barcelona became progressively independent. In 1137, Barcelona and the Kingdom of Aragon were united by marriage under the Crown of Aragon. Within the Crown, the Catalan counties adopted a common polity, the Principality of Catalonia, developing its institutional system, such as Courts, Generalitat and constitutions, becoming the base for the Crown's Mediterranean trade and expansionism. In the later Middle Ages, Catalan literature flourished. In 1469, the king of Aragon and the queen of Castile were married and ruled their realms together, retaining all of their distinct institutions and legislation. During the Franco-Spanish War (1635–1659), Catalonia revolted (1640–1652) against a large and burdensome presence of the royal army, being briefly proclaimed a republic under French protection until it was largely reconquered by the Spanish army. By the Treaty of the Pyrenees (1659), the northern parts of Catalonia, mostly the Roussillon, were ceded to France. During the War of the Spanish Succession (1701–1714), the Crown of Aragon sided against the Bourbon Philip V of Spain, but the Catalans were defeated with the fall of Barcelona on 11 September 1714. Philip V subsequently imposed a unifying administration across Spain, enacting the Nueva Planta decrees which, like in the other realms of the Crown of Aragon, suppressed Catalan institutions and rights. As a consequence, Catalan as a language of government and literature was eclipsed by Spanish. Throughout the 18th century, Catalonia experienced economic growth. In the 19th century, Catalonia was severely affected by the Napoleonic and Carlist Wars. In the second third of the century, it experienced industrialisation. As wealth from the industrial expansion grew, it saw a cultural renaissance coupled with incipient nationalism while several workers' movements appeared. With the establishment of the Second Spanish Republic (1931–1939), the Generalitat was restored as a Catalan autonomous government. After the Spanish Civil War, the Francoist dictatorship enacted repressive measures, abolishing Catalan self-government and banning the official use of the Catalan language. After a period of autarky, from the late 1950s through to the 1970s Catalonia saw rapid economic growth, drawing many workers from across Spain, making Barcelona one of Europe's largest industrial metropolitan areas and turning Catalonia into a major tourist destination. 
During the Spanish transition to democracy (1975–1982), Catalonia regained self-government and is now one of the most economically dynamic communities in Spain. Since the 2010s, there has been growing support for Catalan independence. On 27 October 2017, the Catalan Parliament unilaterally declared independence following a referendum that was deemed unconstitutional by the Spanish state. The Spanish Senate voted in favour of enforcing direct rule by removing the Catalan government and calling a snap regional election. The Spanish Supreme Court imprisoned seven former ministers of the Catalan government on charges of rebellion and misuse of public funds, while several others—including then-President Carles Puigdemont—fled to other European countries. Those in prison were pardoned by the Spanish government in 2021. | 2001-10-17T11:14:28Z | 2023-12-30T09:15:05Z | [
"Template:Doi",
"Template:Short description",
"Template:Lang-ca",
"Template:Ill",
"Template:ISBN",
"Template:Navboxes",
"Template:See also",
"Template:Circa",
"Template:Portal",
"Template:Dead link",
"Template:Catalonia topics",
"Template:Lang-la-x-medieval",
"Template:Abbr",
"Template:Webarchive",
"Template:Cbignore",
"Template:Transliteration",
"Template:Clarify",
"Template:Notelist",
"Template:ISSN",
"Template:Lang-oc",
"Template:Nbsp",
"Template:Pie chart",
"Template:Citation needed",
"Template:Cite news",
"Template:EB1911",
"Template:Flagu",
"Template:Cite book",
"Template:Cite journal",
"Template:Cite magazine",
"Template:Use British English",
"Template:Lang-es",
"Template:Dubious",
"Template:Further",
"Template:Unbulleted list",
"Template:Historical populations",
"Template:IPA-ca",
"Template:IPA-es",
"Template:Lang",
"Template:Which",
"Template:Convert",
"Template:Largest cities",
"Template:Cite web",
"Template:Use dmy dates",
"Template:Not a typo",
"Template:Rp",
"Template:Main",
"Template:Update inline",
"Template:IPA-oc",
"Template:Politics of Catalonia",
"Template:Identificador carretera española",
"Template:Authority control",
"Template:Multiple image",
"Template:When",
"Template:Sister project links",
"Template:Efn",
"Template:For timeline",
"Template:Flagicon",
"Template:Reflist",
"Template:Citation",
"Template:Explain",
"Template:Clear",
"Template:In lang",
"Template:About",
"Template:Infobox settlement",
"Template:IPAc-en",
"Template:Wikt-lang"
] | https://en.wikipedia.org/wiki/Catalonia |
6,823 | Konstantinos Kanaris | Konstantinos Kanaris (Greek: Κωνσταντίνος Κανάρης, Konstantínos Kanáris; c. 1790 – 2 September 1877), also anglicised as Constantine Kanaris or Canaris, was a Greek admiral, Prime Minister, and a hero of the Greek War of Independence.
Konstantinos Kanaris was born and grew up on the island of Psara, close to the island of Chios, in the Aegean. The exact year of his birth is unknown. Official records of the Hellenic Navy indicate 1795; however, modern Greek historians consider 1790 or 1793 to be more probable.
He was left an orphan at a young age. Having to support himself, he chose to become a seaman like most members of his family since the beginning of the 18th century. He was subsequently hired as a boy on the brig of his uncle Dimitris Bourekas.
Kanaris gained his fame during the Greek War of Independence (1821–1829). Unlike most other prominent figures of the War, he had never been initiated into the Filiki Eteria (Society of Friends), which played a significant role in the uprising against the Ottoman Empire, primarily through the secret recruitment of supporters against Turkish rule.
By early 1821, the movement had gained enough support to launch a revolution. This seems to have inspired Kanaris, who was in Odessa at the time. He returned to the island of Psara in haste and was present when it joined the uprising on 10 April 1821.
The island formed its own fleet and the famed seamen of Psara, already known for their well-equipped ships and successful battles against sea pirates, proved to be highly effective in naval warfare. Kanaris soon distinguished himself as a fire ship captain.
At Chios, on the moonless night of 6–7 June 1822, forces under his command destroyed the flagship of Nasuhzade Ali Pasha, Kapudan Pasha (Grand Admiral) of the Ottoman fleet, in revenge for the Chios massacre. The admiral was holding a Bayram celebration, allowing Kanaris and his men to position their fire ship without being noticed. When the flagship's powder store caught fire, all men aboard were instantly killed. The Turkish casualties comprised 2,300 men, both naval officers and common sailors, as well as Nasuhzade Ali Pasha himself.
Kanaris led another successful attack against the Ottoman fleet at Tenedos in November 1822. He was famously said to have encouraged himself by murmuring "Konstantí, you are going to die" every time he was approaching a Turkish warship on the fire boat he was about to detonate.
The Ottoman fleet captured Psara on 21 June 1824. Part of the population, including Kanaris, managed to flee the island; those who did not were either sold into slavery or slaughtered. After the destruction of his home island, he continued to lead attacks against Turkish forces. In August 1824, he engaged in naval combat in the Dodecanese.
The following year, Kanaris led the Greek raid on Alexandria, a daring attempt to destroy the Egyptian fleet with fire ships that might have been successful if the wind had not failed just after the Greek ships entered Alexandria harbour.
After the end of the War and the independence of Greece, Kanaris became an officer of the new Hellenic Navy, reaching the rank of admiral, and a prominent politician.
Konstantinos Kanaris was one of the few with the personal confidence of Ioannis Kapodistrias, the first Head of State of independent Greece. After the assassination of Kapodistrias on 9 October 1831, he retired to the island of Syros.
During the reign of King Otto I, Kanaris served as Minister in various governments and then as Prime Minister in the provisional government (16 February – 30 March 1844). He served a second term as Prime Minister (15 October 1848 – 12 December 1849) and later as Navy Minister in the 1854 cabinet of Alexandros Mavrokordatos.
In 1862, he was among the few War of Independence veterans who took part in the bloodless insurrection that deposed the increasingly unpopular King Otto I and led to the election of Prince William of Denmark as King George I of Greece. During George's reign, Kanaris served as Prime Minister for a third term (6 March – 16 April 1864), a fourth term (26 July 1864 – 26 February 1865) and a fifth and last term (7 June – 2 September 1877).
Kanaris died on 2 September 1877 whilst still serving in office as Prime Minister. Following his death, his government remained in power until 14 September 1877 without agreeing on a replacement at its head. He was buried in the First Cemetery of Athens, and his heart was placed in a silver urn.
Konstantinos Kanaris is considered a national hero in Greece and ranks amongst the most notable participants of the War of Independence. Many statues and busts have been erected in his honour, such as Kanaris a Scio in Palermo, Italy. He was also featured on a Greek ₯1 coin and a ₯100 banknote issued by the Bank of Greece.
To honour Kanaris, the following ships of the Hellenic Navy have been named after him:
Te Korowhakaunu / Kanáris Sound, a section of Taiari / Chalky Inlet in New Zealand's Fiordland National Park, was named after Konstantinos Kanaris by French navigator and explorer Jules de Blosseville (1802–1833).
In 1817, Konstantinos Kanaris married Despoina Maniatis, from a historical family of Psara.
They had seven children:
Wilhelm Canaris, a German admiral, speculated that he might be a descendant of Konstantinos Kanaris. An official genealogical family history researched in 1938 showed, however, that he was of Italian descent and not related to the Kanaris family from Greece. | [
{
"paragraph_id": 0,
"text": "Konstantinos Kanaris (Greek: Κωνσταντίνος Κανάρης, Konstantínos Kanáris; c. 1790 – 2 September 1877), also anglicised as Constantine Kanaris or Canaris, was a Greek admiral, Prime Minister, and a hero of the Greek War of Independence.",
"title": ""
},
{
"paragraph_id": 1,
"text": "Konstantinos Kanaris was born and grew up on the island of Psara, close to the island of Chios, in the Aegean. The exact year of his birth is unknown. Official records of the Hellenic Navy indicate 1795, however, modern Greek historians consider 1790 or 1793 to be more probable.",
"title": "Biography"
},
{
"paragraph_id": 2,
"text": "He was left an orphan at a young age. Having to support himself, he chose to become a seaman like most members of his family since the beginning of the 18th century. He was subsequently hired as a boy on the brig of his uncle Dimitris Bourekas.",
"title": "Biography"
},
{
"paragraph_id": 3,
"text": "Kanaris gained his fame during the Greek War of Independence (1821–1829). Unlike most other prominent figures of the War, he had never been initiated into the Filiki Eteria (Society of Friends), which played a significant role in the uprising against the Ottoman Empire, primarily by secret recruitment of supporters against the Turkish rule.",
"title": "Biography"
},
{
"paragraph_id": 4,
"text": "By early 1821, the movement had gained enough support to launch a revolution. This seems to have inspired Kanaris, who was in Odessa at the time. He returned to the island of Psara in haste and was present when it joined the uprising on 10 April 1821.",
"title": "Biography"
},
{
"paragraph_id": 5,
"text": "The island formed its own fleet and the famed seamen of Psara, already known for their well-equipped ships and successful battles against sea pirates, proved to be highly effective in naval warfare. Kanaris soon distinguished himself as a fire ship captain.",
"title": "Biography"
},
{
"paragraph_id": 6,
"text": "At Chios, on the moonless night of 6–7 June 1822, forces under his command destroyed the flagship of Nasuhzade Ali Pasha, Kapudan Pasha (Grand Admiral) of the Ottoman fleet, in revenge for the Chios massacre. The admiral was holding a Bayram celebration, allowing Kanaris and his men to position their fire ship without being noticed. When the flagship's powder store caught fire, all men aboard were instantly killed. The Turkish casualties comprised 2,300 men, both naval officers and common sailors, as well as Nasuhzade Ali Pasha himself.",
"title": "Biography"
},
{
"paragraph_id": 7,
"text": "Kanaris led another successful attack against the Ottoman fleet at Tenedos in November 1822. He was famously said to have encouraged himself by murmuring \"Konstantí, you are going to die\" every time he was approaching a Turkish warship on the fire boat he was about to detonate.",
"title": "Biography"
},
{
"paragraph_id": 8,
"text": "The Ottoman fleet captured Psara on 21 June 1824. A part of the population, including Kanaris, managed to flee the island, but those who didn't were either sold into slavery or slaughtered. After the destruction of his home island, he continued to lead attacks against Turkish forces. In August 1824, he engaged in naval combats in the Dodecanese.",
"title": "Biography"
},
{
"paragraph_id": 9,
"text": "The following year, Kanaris led the Greek raid on Alexandria, a daring attempt to destroy the Egyptian fleet with fire ships that might have been successful if the wind had not failed just after the Greek ships entered Alexandria harbour.",
"title": "Biography"
},
{
"paragraph_id": 10,
"text": "After the end of the War and the independence of Greece, Kanaris became an officer of the new Hellenic Navy, reaching the rank of admiral, and a prominent politician.",
"title": "Biography"
},
{
"paragraph_id": 11,
"text": "Konstantinos Kanaris was one of the few with the personal confidence of Ioannis Kapodistrias, the first Head of State of independent Greece. After the assassination of Kapodistrias on 9 October 1831, he retired to the island of Syros.",
"title": "Biography"
},
{
"paragraph_id": 12,
"text": "During the reign of King Otto I, Kanaris served as Minister in various governments and then as Prime Minister in the provisional government (16 February – 30 March 1844). He served a second term (15 October 1848 – 12 December 1849), and as Navy Minister in the 1854 cabinet of Alexandros Mavrokordatos.",
"title": "Biography"
},
{
"paragraph_id": 13,
"text": "In 1862, he was among the rare War of Independence veterans who took part in the bloodless insurrection that deposed the increasingly unpopular King Otto I and led to the election of Prince William of Denmark as King George I of Greece. During his reign, Kanaris served as a Prime Minister for a third term (6 March – 16 April 1864), fourth term (26 July 1864 – 26 February 1865) and fifth and last term (7 June – 2 September 1877).",
"title": "Biography"
},
{
"paragraph_id": 14,
"text": "Kanaris died on 2 September 1877 whilst still serving in office as Prime Minister. Following his death his government remained in power until 14 September 1877 without agreeing on a replacement at its head. He was buried in the First Cemetery of Athens and his heart was placed in a silver urn.",
"title": "Biography"
},
{
"paragraph_id": 15,
"text": "Konstantinos Kanaris is considered a national hero in Greece and ranks amongst the most notable participants of the War of Independence. Many statues and busts have been erected in his honour, such as Kanaris a Scio in Palermo, Italy. He was also featured on a Greek ₯1 coin and a ₯100 banknote issued by the Bank of Greece.",
"title": "Legacy"
},
{
"paragraph_id": 16,
"text": "To honour Kanaris, the following ships of the Hellenic Navy have been named after him:",
"title": "Legacy"
},
{
"paragraph_id": 17,
"text": "Te Korowhakaunu / Kanáris Sound, a section of Taiari / Chalky Inlet in New Zealand's Fiordland National Park, was named after Konstantinos Kanaris by French navigator and explorer Jules de Blosseville (1802–1833).",
"title": "Legacy"
},
{
"paragraph_id": 18,
"text": "In 1817, Konstantinos Kanaris married Despoina Maniatis, from a historical family of Psara.",
"title": "Family"
},
{
"paragraph_id": 19,
"text": "They had seven children:",
"title": "Family"
},
{
"paragraph_id": 20,
"text": "Wilhelm Canaris, a German Admiral, speculated that he might be a descendant of Konstantinos Kanaris. An official genealogical family history that was researched in 1938 showed however, that he was of Italian descent and not related to the Kanaris family from Greece.",
"title": "Family"
}
] | Konstantinos Kanaris, also anglicised as Constantine Kanaris or Canaris, was a Greek admiral, Prime Minister, and a hero of the Greek War of Independence. | 2001-10-17T10:31:22Z | 2023-12-16T20:02:02Z | [
"Template:About",
"Template:Cite web",
"Template:Cite news",
"Template:EB1911",
"Template:Commons category",
"Template:Authority control",
"Template:Short description",
"Template:Ship",
"Template:Sclass",
"Template:Greece Old Style dating",
"Template:S-off",
"Template:S-bef",
"Template:S-aft",
"Template:Greek War of Independence",
"Template:Use dmy dates",
"Template:Lang-el",
"Template:Spaced ndash",
"Template:Cite book",
"Template:Naval Wars in the Levant 1559–1853",
"Template:S-start",
"Template:S-ttl",
"Template:S-end",
"Template:Infobox officeholder",
"Template:Transliteration",
"Template:Reflist",
"Template:Heads of government of Greece"
] | https://en.wikipedia.org/wiki/Konstantinos_Kanaris |